Blog List

Sunday, 22 October 2017

Engineering the Next Generation of Brain Scientists

Author
Brian Litt
Penn Epilepsy Center, Departments of Neurology, Neurosurgery and Bioengineering, Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA
Available online 8 April 2015.
New technologies to probe the nervous system are propelling innovation and discovery at blinding speed, but are our trainees prepared to maximize this power? The growing role of engineering in research, such as materials, computing, electronics, and devices, compels us to rethink neuroscience education. Core technology requirements, cross-disciplinary education, open-source resources, and experiential learning are new ways we can efficiently equip future leaders to make the next disruptive discoveries.

Main Text

The Challenge

In 1888, when the young Ramón y Cajal first published evidence that neurons were discrete directional conductors, he leveraged a new technology, Golgi’s silver stain, to liberate his vision and creativity (Andres-Barquin, 2002). Cajal encountered Golgi’s method in Madrid and quickly mastered the technique. He improved it through repetitive staining and made observations that gave birth to modern neuroscience, winning him the Nobel Prize in Physiology or Medicine in 1906. Fast-forward 127 years, and the story is much the same. New technologies like optogenetics, nanotechnology, graphene, and cloud computing focus students on questions that were unapproachable 5 years ago. Unlike in Cajal’s time, new approaches now spread at the speed of light over digital media. Entrepreneurship, rapid fabrication, and downloadable computer code accelerate their adoption. As with Cajal, young scientists embrace technological advances and apply them to interesting problems in novel ways. The challenge today is that as new technologies become increasingly complex and expensive, it is no longer possible to master them alone in a reasonable amount of time.
Preparation for careers in neuroscience has traditionally been diverse, combining life and research experience with broad competency in math, science, and writing, as measured by the Graduate Record Examinations (GRE). The depth and breadth of knowledge required of modern neuroscientists, however, far exceed these core requirements, placing a tremendous burden on PhD programs to fill the gaps. Though we embrace the diversity of thought that varied backgrounds bring to neuroscience, it is vital that we reconsider what skills our students need to succeed. This includes evaluating their education prior to graduate study and how to prepare them as effectively and efficiently as possible during Master’s programs and PhD training. We must address these issues soon if we want to optimize the rate, efficiency, and cost of neuroscience discovery and clinical translation.
For neuroscientists, a major change since Cajal’s time is the complexity of the scientific techniques and equipment we use and the knowledge and cost required to harness their power. Though the task was not trivial, Cajal found the time, funds, and intellectual energy to master Golgi’s technique on his own in short order. His materials were commonplace and easily acquired. Compare this to the resources necessary to acquire and process scanned, digitized volume images from Karl Deisseroth’s CLARITY (Cajal’s modern equivalent), to collect Mark Schnitzer’s continuous in vivo calcium images of place cell populations, or to reconstruct in real time the activity dictating movement in seven degrees of freedom from hundred-contact multielectrode arrays (Chung and Deisseroth, 2013; Collinger et al., 2013; Ziv et al., 2013). These approaches are orders of magnitude more complex than Cajal’s, and they are expensive, even adjusting for inflation. Operating the arrays, cameras, scopes, data acquisition, and processing equipment required for any of the above experiments would take years for even the most brilliant investigators to master alone, without the right preparation. Equally important, cutting corners on basic technical education risks training sophisticated equipment “operators” who are easy prey to technical and conceptual errors. For knowledge to grow, this group must have the insight to push state-of-the-art tools to their performance limits as we strive to map human behavior and disease to the resolution of single cells.
To add to the challenge, the pressures and duration of neuroscience training are increasing. Students enter PhD or MD-PhD programs later, after time off to work, learn, and sort out the dizzying array of career options available in science. In addition, the body of accumulated neuroscience knowledge alone can take years to master, independent of technology training. Changes in laboratory roles also impact our educational needs. The reality of academic research careers in many countries is that “hands-on” experiments often rest more in the hands of PhD students and postdocs than principal investigators, as the latter spend more time writing grant proposals and negotiating an increasingly complex administrative and regulatory landscape. Trainees, particularly senior students and postdocs, often become the “go to” authorities for implementing new techniques.
One answer to the technological “skills gap” in neuroscience might be to recruit more students with degrees in engineering, computer science, mathematics, or physics into technologically intensive research laboratories. This is already occurring and is one catalyst growing the multidisciplinary field of “neuroengineering.” Recruiting from this pool could be augmented by tracking elite undergraduates in these fields into neuroscience early, through defined minors or concentrations in their undergraduate curricula. Such programs might even directly feed neuroscience graduate programs, perhaps guaranteeing these students admission into PhD training after the second year in college and requiring only sustained high levels of performance through graduation for matriculation. Another option is to broaden pre-PhD education requirements or to revise recommended educational guidelines to better prepare students from diverse backgrounds for graduate study in the neurosciences. While some mix of these two approaches is likely to be most efficient, one could argue that the nature of modern neuroscience research is such that all trainees in this discipline should have a core technology component to their education, both prior to admission to PhD programs and during them. This could also strengthen PhD students who enter with pure biology or psychology degrees and who may have less technical preparation than their colleagues trained in the “harder” sciences.

What They Need

Neuroscience research is, by virtue of our expanding knowledge, growing more diverse and subspecialized. Training in specific fields, such as behavioral, cellular, or systems neuroscience, is traditionally hands-on and individualized to specific laboratories. Many principal investigators promote an apprenticeship model, in which students acquire the necessary skills over a prolonged period of time. While effective in some ways, this approach can create knowledge gaps outside of a specific lab’s focus and can prolong PhD training during the thesis years. A careful look suggests that there are basic technical skills, common to all of these fields, that are necessary to succeed. Appropriate preparation in these core skills could allow students to “hit the ground running” in their research and shorten the duration of training.
The process begins with formulating the question to be addressed by the research. This step is now quite different from Cajal’s time, as the myriad techniques available are intimately involved in the process. One could argue that good hypotheses are predicated upon state-of-the-art knowledge of what is technically possible, perhaps conveyed through an overview course in the technology of neuroscience. Such material could either stand alone or be incorporated explicitly into the basic neuroscience curriculum delivered in college or in first-year PhD training. In parallel to this broad knowledge, core technology requirements for the experimental experience can be broken down into three basic research functions: (1) data acquisition, (2) data wrangling, and (3) analysis and interpretation. Even in laboratories where not all of these activities take place, for example those focused on computational modeling, one could argue that basic knowledge in these three domains is essential to being a competent neuroscientist. This also holds true for the genetic and molecular cores of neuroscience, which rely more and more on data and computing resources.

Data Acquisition

Acquiring data, even in laboratories using traditional techniques, increasingly involves arrays of sensors, stimulating or activating devices, imaging, digital sampling, recording, and storage systems. Sensors may be quite disparate, ranging from huge multielectrode electrophysiology arrays to optical elements, cameras, and imaging devices, including those used in genetic and molecular investigation. Devices that actually touch neural tissue may be made of new materials whose properties need to be understood, particularly biocompatibility, mechanics, electrical characteristics, and durability. The equipment involved in connecting sensors to data acquisition units also has basic components that must be understood, such as amplifiers, filters, analog-to-digital converters, connections to storage (e.g., wireless, wired), and noise-cancellation circuitry. Education regarding safety features (shielding, fuses, stimulation limiters, etc.), maintenance requirements, and reusable or replaceable components is also important, though much of this falls under the purview of technicians in many labs.
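To make the tail end of such an acquisition chain concrete, the short Python sketch below (this author’s illustration, not part of the original article; the signal, sampling rate, bit depth, and voltage range are arbitrary choices) shows how an idealized analog-to-digital converter turns a continuous voltage into the discrete samples a lab actually stores, and how the chosen bit depth bounds the error of the recorded trace.

import numpy as np

def digitize(analog, fs_analog, fs_adc, n_bits, v_range=5.0):
    """Resample a densely sampled 'analog' trace at the ADC rate and
    quantize it to n_bits over +/- v_range volts (idealized converter)."""
    step = int(round(fs_analog / fs_adc))            # keep every step-th sample
    sampled = analog[::step]
    lsb = 2 * v_range / (2 ** n_bits)                # volts per quantization step
    return np.clip(np.round(sampled / lsb) * lsb, -v_range, v_range)

# Hypothetical "analog" input: a 10 Hz oscillation on a very fine time grid.
fs_analog = 100_000
t = np.arange(0, 1, 1 / fs_analog)
analog = 2.0 * np.sin(2 * np.pi * 10 * t)

coarse = digitize(analog, fs_analog, fs_adc=1_000, n_bits=8)     # 8-bit converter
fine = digitize(analog, fs_analog, fs_adc=1_000, n_bits=16)      # 16-bit converter
print("8-bit worst-case error (V): ", np.max(np.abs(coarse - analog[::100])))
print("16-bit worst-case error (V):", np.max(np.abs(fine - analog[::100])))

Even this toy example makes the trade-off visible: every extra bit of resolution halves the quantization error, which is exactly the kind of intuition students need before they trust a commercial acquisition system’s default settings.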
Some of the courses required, for example fundamental classes in math, physics, and chemistry, in addition to biology, are common to most students who enter neuroscience. What is missing in many curricula are basic courses in experimental instrumentation. One approach might be to mimic the basic hands-on courses given to engineering undergraduates. Such courses are usually scheduled in contiguous blocks of time, ideally on 1 or 2 days per week, dedicated to in-depth, hands-on experiences that replicate what it is like to work as an experimentalist. The curriculum could be broad and fast moving but practical. It might teach the “need to know” basics of materials used for research (including nanotechnology, tracers, dyes), sensors and effectors (e.g., electrophysiology, cameras/imaging, amplifiers, stimulators), data acquisition systems, electrical/experimental safety, regulation of animal and human experimentation, and ethical/best research practices. Such courses might take on a workshop-like environment, integrating classroom lectures, discussion, and experimental setup with data acquisition, inspection, and basic analysis. Practical issues that come up routinely in the lab but rarely in the classroom could be addressed, such as what to do with seemingly “bad” data, the challenge of getting preparations right, signal “clipping,” and background noise. A series of carefully considered, fundamental experiments across the breadth of neuroscience would not only give invaluable experience but also, if designed properly, expose students to a breadth of research areas that otherwise might not be easily accessible. Finding qualified instructors for these courses, with strong backgrounds in both engineering and neuroscience, may be challenging at first, but as more students are given strong quantitative preparation, the pool of qualified instructors will increase. Certainly, with initial offerings, these courses are likely best taught by collaborating investigators drawn from engineering and neuroscience/biology.

Data Wrangling

Manipulating streams of data, putting them into the required formats and into accessible locations, and learning to handle increasingly large, complex data sets are requirements for any neuroscientist. Traditionally, these skills are accumulated as “on-the-job training” after joining a lab, but lack of rigorous preparation in these areas insidiously erodes our research, and to some degree our credibility. One need only look at a representative sample of laboratories and ask: are data and annotations stored rigorously in a central, accessible place and format, and archived in such a way that anyone will be able to use them once the acquiring graduate student or postdoc has left the lab? Is computer code rigorously commented, versioned, and stored in a central repository (e.g., GitHub) so that analyses can be reproduced easily? Are data, code, annotations, notes, and experiments stored in such a way that they would stand up to rigorous inspection or inquiry if the work were challenged 2 years from now? These are the standards to which our data management should be held. While these standards are easy for any of us to articulate, one suspects that there is room for improvement in all of our laboratories and that the details are often left up to individual graduate students and postdocs, with variable results.
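As one concrete illustration of what these standards can look like in practice, the short Python sketch below (a hypothetical example of this author’s devising; the file and field names are invented) uses the open HDF5 format, via the h5py package, to store a recording, its annotations, and basic metadata together in a single self-describing file rather than scattering them across a student’s personal folders.

import h5py
import numpy as np

fs = 30_000                                   # sampling rate in Hz (illustrative)
# Stand-in for a real recording: 10 s of 16-channel noise.
lfp = np.random.default_rng(0).standard_normal((16, fs * 10))

with h5py.File("session01.h5", "w") as f:
    dset = f.create_dataset("lfp", data=lfp, compression="gzip")
    dset.attrs["sampling_rate_hz"] = fs
    dset.attrs["units"] = "microvolts"
    dset.attrs["notes"] = "channels 3 and 7 noisy; see lab notebook p. 42"
    # Annotations (e.g., stimulus times) live alongside the raw data rather than
    # in a separate spreadsheet that can be lost or renamed.
    f.create_dataset("stimulus_times_s", data=np.array([1.5, 4.2, 7.9]))

A file written this way can be reopened years later, by anyone in the lab, with the sampling rate, units, and caveats still attached to the data they describe.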
Three basic skills required in this domain are the following: (1) digital signal processing, (2) computer programming, including facility with both a scientific language and some type of “office-suite” software that contains spreadsheet, word processing, and basic database tools, and (3) basic mathematics (preferably through calculus).
Virtually all signals in modern neuroscience research are electronically acquired, digitized, and stored on digital media, and they must be filtered, processed, and accessed. This is true not only in electrophysiology and imaging, but also for signals collected in more “wet” and molecular areas. It is essential that neuroscientists understand how to sample signals so as to faithfully represent them, and how filtering, while necessary, can distort data (e.g., time/phase shifts) and introduce errors (e.g., aliasing) if not done properly. Our researchers should also know how basic methods of transforming digital data in time, frequency, and other domains (e.g., nonlinear dynamics or wavelets) can be incredibly useful in revealing patterns that are not visible by inspection or with more conventional statistical tools. These skills empower scientists to focus on signals of interest and separate them from unwanted components. Formal training in these areas should ideally be required before graduate study or made available to students who enter graduate programs without this preparation.
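A small worked example can make the sampling and filtering points concrete. The Python sketch below (a generic digital signal processing illustration by this author, not a protocol from the article; the frequencies and rates are arbitrary) shows how decimating a signal without an anti-aliasing filter folds a 70 Hz component down to a spurious 30 Hz rhythm, while zero-phase low-pass filtering before decimation leaves only the genuine slow component.

import numpy as np
from scipy import signal

fs_true = 2000                      # Hz: dense sampling of the "true" signal
t = np.arange(0, 2, 1 / fs_true)
x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 70 * t)   # 8 Hz + 70 Hz

fs_low = 100                        # Hz: a too-slow acquisition rate
naive = x[:: fs_true // fs_low]     # decimate with no filter: 70 Hz aliases to 30 Hz

# Proper approach: low-pass below the new Nyquist frequency (50 Hz), then decimate.
sos = signal.butter(4, 40, btype="low", fs=fs_true, output="sos")
filtered = signal.sosfiltfilt(sos, x)            # zero-phase filtering avoids time shifts
clean = filtered[:: fs_true // fs_low]

def dominant_freqs(y, fs, n=2):
    """Return the n strongest frequency components of y (simple FFT peak pick)."""
    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    return sorted(freqs[np.argsort(spec)[-n:]])

print("naively decimated:", dominant_freqs(naive, fs_low))   # about [8, 30]; 30 Hz is an alias
print("filtered first:   ", dominant_freqs(clean, fs_low))   # 8 Hz plus only negligible residue

The aliased 30 Hz rhythm in the first output is indistinguishable, after the fact, from real physiology, which is exactly why this material belongs in formal training rather than trial-and-error lab practice.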
Computer programming is now a skill that every neuroscientist must have, and this training is probably best provided as part of undergraduate preparation for PhD programs. This skill is all too often absent in PhD candidates who have been immersed only in the biology of neuroscience, putting them at a tremendous disadvantage during PhD research. The language used will depend upon laboratory standards and the specific work, but most scientists appreciate that once fundamental skills are acquired in a single language, learning others becomes much simpler. Languages commonly used in neuroscience research include MATLAB (MathWorks; a commercial package requiring a paid license), Python, C, R, and Java. Each of these languages has its own advantages. MATLAB, for example, has a wealth of “toolboxes” that automate complex processing tasks, saving time but adding up-front expense. Python is an efficient, open-source platform that is rapidly enlarging its user base and toolboxes of its own, though it is a relative newcomer compared to some commercial packages. C is very efficient for big, computationally intensive jobs, R is often the choice of the statistical community, and Java is versatile all around, with particular utility in web-based as well as scientific applications. It is important to note that the choice of programming language is a complex discussion on many levels, relating to specific applications, and that many scientists have strong preferences and opinions on this subject.
Along with computer programming, students should learn the basics of data formats, compression, encryption, storage, and transfer. They should learn how to craft “pipelines,” whereby experimental data are streamed through a series of processing steps that automate the flow from data acquisition to results. As data sets get larger and more complex, basic knowledge of servers, clusters, and cloud computing will become vital to success. Computer storage and processing power on the cloud are becoming more economical and efficient, making skills in this domain highly desirable and, before long, essential. Many laboratories (this investigator’s, for example) are already eschewing the cost and bother of maintaining their own storage and computational clusters in favor of resources from commercial “cloud” providers. In addition, the ability to draw upon virtually unlimited computing power at will, to take on the large processing jobs required by “big data,” has tremendous advantages for the modern neuroscientist.
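As a sketch of what such a pipeline can look like at its simplest, the Python example below (this author’s illustration with hypothetical file names and parameters; a real pipeline would add logging, data checks, and version-controlled code) chains loading, band-pass filtering, downsampling, and compressed storage with a small metadata sidecar into one automated script.

import json
import numpy as np
from scipy import signal

def load_raw(path, n_channels, dtype=np.int16):
    """Read interleaved binary samples, as produced by many acquisition systems."""
    data = np.fromfile(path, dtype=dtype)
    return data.reshape(-1, n_channels).T.astype(np.float64)

def preprocess(data, fs, band=(1.0, 300.0), decimate_to=1000):
    """Band-pass each channel, then downsample to a manageable rate."""
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = signal.sosfiltfilt(sos, data, axis=1)
    factor = int(fs // decimate_to)
    return filtered[:, ::factor]        # safe: the signal is already limited to < 300 Hz

def save_result(data, fs, out_path, notes):
    """Store the processed data compressed, with a small metadata sidecar."""
    np.savez_compressed(out_path, data=data, fs=fs)
    with open(out_path + ".json", "w") as f:
        json.dump({"fs": fs, "shape": list(data.shape), "notes": notes}, f, indent=2)

if __name__ == "__main__":
    # For demonstration, fabricate a short 32-channel recording in the
    # interleaved int16 format the loader expects.
    fs = 30_000
    fake = (100 * np.random.default_rng(0).standard_normal((fs * 2, 32))).astype(np.int16)
    fake.tofile("session01.dat")

    raw = load_raw("session01.dat", n_channels=32)
    processed = preprocess(raw, fs=fs)
    save_result(processed, fs=1000, out_path="session01_lfp",
                notes="band-passed 1-300 Hz, downsampled to 1 kHz")

Because every step is a function, the same script can be rerun unchanged on the next session’s file, which is the essential property that distinguishes a pipeline from a pile of one-off commands.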
While it may seem that the computer knowledge outlined above is daunting, the basics, for students with the proper math background, could likely be covered in a single-semester, well-designed course for motivated students. Similar to the experimental technology course outlined above, students could be taught computer programming through practical, hands-on exercises using real experimental data. Such a course should include taking data from the point of acquisition, transferring it to appropriate storage, compressing or changing data formats, and doing some type of first-pass visualization to verify data integrity. Such experiments would not yield a complete knowledge base, but a course in data wrangling, perhaps given in sequence with experimental technology above, could give students a strong foundation that can be expanded when they become attached to specific laboratories. In addition, this course lends itself extremely well to interfacing with education in data analysis, described next.

Analysis and Interpretation

Finally, hands-on training in basic tools for visualizing data, statistics, and at least an introduction to more complex analytic methods, such as machine learning, clustering, and independent component analysis, would be extremely useful for graduate students. Lack of experience in these areas is often responsible for errors in publications and for difficulty in writing grants and critically reviewing the experimental literature. While many undergraduate majors require basic statistics training, graduate programs in neuroscience do not uniformly require proficiency in this area. Tools for data visualization might include some of the computer languages mentioned above (e.g., MATLAB, Python, Mathematica) as well as packages such as Excel, Stata, GIMP, Inkscape, and the Adobe suite. Again, a hands-on course looking at carefully chosen data sets taken from specific classic experiments might be the perfect venue for teaching these practical skills. Such a course may be most usefully taught in the form of a working seminar or journal club in which data sets from important experiments are made available and analyzed using widely available routines or toolboxes. This might be taught during the classroom years in PhD programs or perhaps during undergraduate training.

In this way, a sequence of three practical courses (experimental technology/data acquisition, data wrangling, and visualization and interpretation) would dramatically improve technical competence in young neuroscientists. These courses might ideally be taught at the undergraduate level to increase productivity and shorten graduate training. They might also be delivered through more creative approaches, such as an intensive “boot camp” given full time on a compressed schedule during the first two summers of graduate training. This approach, while ambitious, could be incredibly useful, though it would still require advanced preparation, at a minimum basic requirements in math, computer programming, and introductory statistics, to be worthwhile.
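To illustrate the kind of first-pass analysis such a course might teach, the Python sketch below (synthetic data invented for illustration, using the open-source scikit-learn and matplotlib packages; nothing here comes from a real experiment) reduces simulated spike waveforms to two principal components, clusters them with k-means, and plots the result.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulate 200 "spike waveforms" (40 samples each) from two hypothetical units.
t = np.linspace(0, 1, 40)
unit_a = -np.exp(-((t - 0.30) ** 2) / 0.002)         # narrow spike shape
unit_b = -0.6 * np.exp(-((t - 0.45) ** 2) / 0.010)   # broader, smaller spike shape
waveforms = np.vstack([
    unit_a + 0.05 * rng.standard_normal((100, 40)),
    unit_b + 0.05 * rng.standard_normal((100, 40)),
])

# Reduce to two principal components, then cluster.
scores = PCA(n_components=2).fit_transform(waveforms)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

plt.scatter(scores[:, 0], scores[:, 1], c=labels, cmap="coolwarm", s=12)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("Simulated spike waveforms: k-means clusters in PCA space")
plt.savefig("cluster_demo.png", dpi=150)

A student who has built even this toy analysis once knows where the arbitrary choices hide (the number of components, the number of clusters, the noise level) and is far better equipped to question the same choices when they appear in a published figure.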
An added benefit of this type of education is that the skills learned are incredibly useful and marketable, especially for those students who end up not staying in academic neuroscience.

Workshops, Tools, and Open-Source Resources

At the core of any plan to modernize neuroscience education is the need for trainees to be exposed to and taught by experts from a number of different disciplines, including computer scientists, statisticians, and translational/clinical investigators, among others. A great way to gain this exposure is through workshops, residential courses, and shared or open-source tools. Of course there are already resources like these where neuroscience trainees can acquire skills in these disciplines, though they are usually focused on a particular topic. Workshops are available in most of the core areas described above, either through government-sponsored programs at the National Science Foundation and NIH, through professional societies such as the Institute of Electrical and Electronics Engineers (IEEE), or through specific centers or laboratories at major academic institutions. Residential summer courses, such as those given at Woods Hole and Cold Spring Harbor, might be an excellent way to efficiently impart strong technological skills to budding neuroscientists, though they would need to be more general and address the topics outlined above. Some of these programs could be sponsored by neuroscience organizations, engineering groups, or specific research centers, with funding from government organizations, the private sector, or dedicated training grants. The Federation of European Neuroscience Societies (FENS) sponsors a variety of hands-on courses and workshops for small groups of students in dedicated locations, as do specific philanthropic and disease-centered foundations that train young investigators and students to work in specific areas. These workshops or compressed hands-on experiences, which could supplement or replace the course sequence described above, are another way of closing potential technical skills gaps in neuroscience trainees. Given that the number of incoming graduate trainees in this area is not prohibitively large, a large educational body, such as the Society for Neuroscience, could sponsor a series of geographically dispersed workshops on these technical topics, bringing new students from disparate backgrounds together while providing basic education in the topics outlined above. These types of experiences would be a great way to raise basic competence for neuroscience trainees and to handle specific topics in great depth, but they are unlikely to substitute for more intensive, guided training through rigorous course work.
Learning about the open-source resources and tools available to neuroscience researchers is another way to expand the technical capabilities and knowledge of new trainees. Platforms like https://www.ieeg.org, the CRCNS website (https://crcns.org/), the Allen Institute (http://alleninstitute.org/), LONI (http://www.loni.usc.edu/), ITK (http://www.itk.org/), PhysioNet (http://www.physionet.org/), GenBank, and those associated with the Human Connectome Project are examples of such resources. This list of resources for sharing data and algorithms and verifying research results is expected only to grow, as the NIH, the European Union, and other agencies increasingly require data sharing as a condition of accepting research funding.

Practical Considerations

It is important to note that the above ideas are meant to stimulate introspection and discussion rather than to criticize the current state of neuroscience training. The fact that neuroscience research is thriving, growing, and accelerating suggests that our training programs are doing quite a bit right in preparing our trainees for the future. Still, change can be slow to arrive in some labs in the absence of up-to-date training in the newest technology. How many laboratories, for example, might stop purchasing servers, computing clusters, or large banks of hard drives for data storage and analysis if they had postdocs or students who were adept with cloud computing and aware of the economies of scale it can provide at now-plummeting costs? Similarly, how many papers would be of higher quality and impact if our trainees had better training in statistics, data visualization, programming, and analysis? How rapidly could research be accelerated if our students and postdocs were inculcated in an open-source, data-sharing, and open-validation culture? These are questions worth considering when reading the thoughts above critically. It is also clear that not every student or institution can do everything and that it may be necessary to separate trainees into clear educational tracks, with some specializing in more technologically intensive areas than others. This approach would certainly breathe more life into technologically intensive areas but, in this author’s opinion, would still not obviate the need for broader technical preparation for all students.

Conclusion

The process of innovation in neuroscience has not changed since the 1880s, when Cajal embraced a new technique, augmented it, and changed the world of neuroscience. Our trainees and young investigators are still incredibly talented, innovative, motivated, and hard working. What has changed is that we have accumulated a huge body of knowledge since that time and a detailed understanding that pushes us to look at greater levels of complexity at smaller scales over larger regions and to integrate huge amounts of information linking behavior to the cellular and subcellular levels. This change pervades all areas of neuroscience, from the molecular and genetic to systems, electrophysiology, behavior, modeling, and imaging. Understanding this level of detail requires fundamental technical expertise that wasn’t necessary 20 or even 10 years ago. For this reason, now is a good time to reevaluate how we train our young neuroscientists to prepare them for an even more exciting future.

Acknowledgments

B.L. acknowledges Zachary Ives, PhD, Joost Wagenaar, PhD, Marc Dichter, MD, PhD, and John Dani, PhD, for their helpful comments. B.L.’s laboratory is supported by NINDS grant 1U24NS063930-01A1, CURE, The Mirowsky Family Foundation, the Brain Research Foundation, and the Brain and Behavior Research Foundation.

References

Andres-Barquin, 2002
P.J. Andres-Barquin
Lancet Neurol., 1 (2002), pp. 445-452
Chung and Deisseroth, 2013
K. Chung, K. Deisseroth
Nat. Methods, 10 (2013), pp. 508-513
Collinger et al., 2013
J.L. Collinger, B. Wodlinger, J.E. Downey, W. Wang, E.C. Tyler-Kabara, D.J. Weber, A.J. McMorland, M. Velliste, M.L. Boninger, A.B. Schwartz
Lancet, 381 (2013), pp. 557-564
Ziv et al., 2013
Y. Ziv, L.D. Burns, E.D. Cocker, E.O. Hamel, K.K. Ghosh, L.J. Kitch, A. El Gamal, M.J. Schnitzer
Nat. Neurosci., 16 (2013), pp. 264-266
For further details, log on to the website:
http://www.sciencedirect.com/science/article/pii/S0896627315002536

Development of next-generation wood preservatives

Research members: Makoto Yoshida, PhD
Research fields: Forest and forest products science
Departments: Institute of Agriculture
Keywords: biomass conversion, wood protection, wood rotting fungi, biofuel, cellulose, hemicellulose, lignin

Summary

Wood preservatives are used to protect wood materials from deterioration by wood-rotting fungi. Their mechanisms of action are normally based on compounds that are toxic to a broad range of microorganisms; thus, from the standpoint of environmental impact and public health, next-generation preservatives that act with high specificity against wood decay are needed.

Recently, we found a novel cellulose-binding pyranose dehydrogenase (PDH) and showed that PDH requires pyrroloquinoline quinone (PQQ) as a cofactor. Since PQQ is known to be produced only by a limited number of bacterial species, the transfer of PQQ from bacteria to fungi is likely important for triggering the catalytic reaction of this type of enzyme. In the present study, we are attempting to develop wood preservatives that show high specificity for the wood decay process by blocking the PQQ-transfer pathway from bacteria to fungi. The research is financially supported by JSPS KAKENHI Grant-in-Aid for Scientific Research (B) [Grant no. 15H04526].


Reference articles and patents

1) Kiwamu Umezawa, Kouta Takeda, Takuya Ishida, Naoki Sunagawa, Akiko Makabe, Kazuo Isobe, Keisuke Koba, Hiroyuki Ohno, Masahiro Samejima, Nobuhumi Nakamura, Kiyohiko Igarashi, Makoto Yoshida*. A novel pyrroloquinoline quinone-dependent 2-keto-D-glucose dehydrogenase from Pseudomonas aureofaciens. Journal of Bacteriology 197, 1322-1329 (2015)
2) Kouta Takeda, Hirotoshi Matsumura, Takuya Ishida, Masahiro Samejima, Hiroyuki Ohno, Makoto Yoshida*, Kiyohiko Igarashi, and Nobuhumi Nakamura. Characterization of a Novel PQQ-Dependent Quinohemoprotein Pyranose Dehydrogenase from Coprinopsis cinerea Classified into Auxiliary Activities Family 12 in Carbohydrate-Active Enzymes. PLOS ONE 10, e0115722 (2015)
3) Hirotoshi Matsumura, Kiwamu Umezawa, Kouta Takeda, Naohisa Sugimoto, Takuya Ishida, Masahiro Samejima, Hiroyuki Ohno, Makoto Yoshida*, Kiyohiko Igarashi, Nobuhumi Nakamura. Discovery of a eukaryotic pyrroloquinoline quinone-dependent oxidoreductase belonging to a new auxiliary activity family in the database of carbohydrate-active enzymes. PLOS ONE 9, e104851 (2014)

Contact

University Research Administration Center (URAC),
Tokyo University of Agriculture and Technology
urac[at]ml.tuat.ac.jp
(Please replace [at] with @.)

For further details, log on to the website:
http://www.rd.tuat.ac.jp/en/activities/factors/search/20150730_3.html

Next Generation Wood: 5 Furnishings Take Tradition to the Next Level

Author
Ali Morris

Wood is an endlessly versatile material that continues to be reinvented and reimagined by new generations of designers. The past year has seen an explosion of wood furniture with adventurous surface finishes that bring unexpected color and texture to this age-old material. Traditional techniques are being revisited while new technology is fuelling the creation of surprising new forms and lightweight hybrids.
1. Jo Nagasaka of Schemata Architects has used a traditional Japanese wood treatment called udukuri to create ColoRing, a collection of neon-stained wood furniture. The wood is polished with a brush made of seagrass that scrapes off the soft tissue and exposes the natural texture of the grain. Layers of leftover paint in clashing colors are then used to stain the pieces, before the timber is polished flat, leaving behind dazzling remnants of the paint within the wood grain.
2. Made from Douglas fir, the Diptych furniture series by Dutch designer Lex Pott features geometric cut-out shapes that are made using an inventive sandblasting process. Working with online platform and design label New Window, Pott has devised a way in which to sandblast away the wood’s soft summer rings while leaving behind a see-through framework of winter rings. By covering parts of the wood with rubber stickers during the sandblasting process, Pott can create geometric patterns with solid and semi-transparent wood. “You can see the life of the tree in the wood,” explains Pott. “Good summers give a wide annual ring, harsh winters a thin one.”
3. Japanese design studio Nendo printed wood grain patterns onto the surface of their wooden Print chairs to create an intriguing layered effect. New printing technology allowed the studio to make fine adjustments to the scale, density and colors used. “For some seats we layered two different wood grain patterns, and for others printed enlarged, abstracted wood grain patterns onto the existing pattern,” says the studio. “For another design, we scanned the wood’s surface then printed the same pattern back onto the wood at another angle. We also experimented with other materials, replacing the seat base with OSB laminate board for one chair and printing a marble pattern onto the wood for another.”
4. Low-cost and versatile, plywood is enjoying a resurgence in popularity. Tacoma, Washington-based craftsman Steve Lawler of Reply Furniture collects scrap plywood to create intricate contemporary furniture. The collected pieces are meticulously worked into tables, chairs, picture frames, and cabinets that show off the natural grain of the plywood.
5. Measuring 7.9 feet long and made from Corelam, a lightweight corrugated plywood material from Canada, Benjamin Hubert’s ultra-light Ripple table weighs just 10.5 kilograms yet can support the weight of a person. The tabletop is made up of three layers of 0.8-millimeter-thick Sitka spruce plywood corrugated together and then topped with a flat sheet of plywood. A curve across the underside of the tabletop provides extra tensile strength, while the legs are made with a sturdy hollow triangular profile. In total, the design uses 80% less material than a standard timber table.
For further information, log on to the website:
http://www.interiordesign.net/articles/8533-next-generation-wood-5-furnishings-take-tradition-/

Advantages and Disadvantages of Fasting for Runners

Author
Andrea Cespedes

Food is fuel, especially for serious runners who need a lot of energy. It may seem counterintuiti...