Showing 20 out of 997 Resources on page 10

OpenElectrophy

A Python software module for electrophysiology data analysis.

  • Resource
  • SciCrunch
  • 13 years ago - by Anonymous

Mexican Health and Aging Study

A dataset from a prospective panel study of health and aging in Mexico. The study was designed to ensure comparability in many domains with the U.S. Health and Retirement Study and with NHANES III. The baseline survey in 2001 is nationally representative of the 13 million Mexicans born prior to 1951. The six Mexican states which are home to 40% of all migrants to the U.S. were over-sampled at a rate of 1.7:1. Spouses/partners of eligible respondents were also interviewed, even if the spouse was born after 1950. Completed interviews were obtained in 9,862 households, for a total of 15,186 individual interviews. All interviews were face-to-face, with an average duration of 82 minutes. A direct interview (on the Basic questionnaire) was sought, and proxy interviews were obtained when poor health or temporary absence precluded a direct interview. Questionnaire topics included the following:

* HEALTH MEASURES: self-reports of conditions, symptoms, functional status, hygienic behaviors (e.g., smoking and drinking history), use/source/costs of health care services, depression, pain, reading and cognitive performance
* BACKGROUND: childhood health and living conditions, education, ability to read/write and count, migration history, marital history
* FAMILY: rosters of all children (including deceased children); for each, demographic attributes, summary indicators of childhood and current health, education, current work status, and migration; parent and sibling migration experiences
* TRANSFERS: financial and time help given to and received by the respondent from children, indexed to the specific child; time and financial help to parents
* ECONOMIC: sources and amounts of income, including wages, pensions, and government subsidies; type and value of assets (all amount variables are bracketed in case of non-response)
* HOUSING ENVIRONMENT: type, location, building materials, other indicators of quality, and ownership of consumer durables
* ANTHROPOMETRIC: for a 20% sub-sample, measured weight and height; waist, hip, and calf circumference; knee height; and timed one-leg stands

Current plans are to conduct two further follow-up surveys, fielding the 3rd and 4th waves of data collection in Mexico in 2012 and 2014. For the 2012 wave, interviews will be sought with every person who was part of the panel in 2003 and their new spouse/partner, if applicable, as well as a new sample of persons born between 1952 and 1962. For the 2014 wave, the whole 2012 sample will be followed up. Interviews will be conducted person-to-person. Direct interviews will be sought with all informants, but proxy interviews are allowed for those unable to complete their own interview for health or cognitive reasons. A next-of-kin interview will be completed with a knowledgeable respondent for those who were part of the panel but have died since the last interview. A sub-sample will be selected to obtain objective markers such as blood samples and anthropometric measures. Data Availability: the 2001 baseline data, 2003 follow-up data, and documentation can be downloaded.

* Dates of Study: 2001-2003
* Study Features: Longitudinal, International, Anthropometric Measures
* Sample Size: 2001: 15,186 (Baseline)
* Link: ICPSR: http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/00142

  • Resource
  • SciCrunch
  • 13 years ago - by Anonymous

Longitudinal Employer-Household Dynamics

A dataset that combines federal and state administrative data on employers and employees with core Census Bureau censuses and surveys, while protecting the confidentiality of the people and firms that provide the data. This data infrastructure facilitates longitudinal research applications in both the household/individual and firm/establishment dimensions. The specific research is targeted at filling an important gap in the available data on older workers by providing information on the demand side of the labor market. These datasets comprise Title 13 protected data from the Current Population Surveys, Surveys of Income and Program Participation, Surveys of Program Dynamics, American Community Surveys, the Business Register, and Economic Censuses and Surveys. With few exceptions, states have partnered with the Census Bureau to share data. As of December 2008, Connecticut, Massachusetts, New Hampshire and Puerto Rico have not signed a partnership agreement, while a partnership with the Virgin Islands is pending. LEHD's second method of developing employer-employee data relations, through the use of federal tax data, has been completed. LEHD has produced summary tables on accessions, separations, job creation, job destruction, and earnings by age and sex of worker, by industry and geographic area. The data files consist of longitudinal datasets on all firms in each participating state (quarterly data, 1991-2003), with information on age, sex, turnover, and skill level of the workforce, as well as standard information on employment, payroll, sales, and location. These data can be accessed for all available states from the Project Website. Data Availability: research conducted on the LEHD data and other products developed under this proposal at the Census Bureau takes place under a set of rules and limitations that are considerably more constraining than those prevailing in typical research environments. If state data are requested, the successful peer-reviewed proposals must also be approved by the participating state. If federal tax data are requested, the successful peer-reviewed proposals must also be approved by the Internal Revenue Service. Researchers using the LEHD data will be required to obtain Special Sworn Status from the Census Bureau and will be subject to the same legal penalties as regular Census Bureau employees for disclosure of confidential information. Basic instructions on how to download the data files, and the applicable restrictions, can be found on the Project Website.

* Dates of Study: 1991-present
* Study Features: Longitudinal
* Sample Size: 48 states or U.S. territories

  • Resource
  • SciCrunch
  • 13 years ago - by Anonymous

ZCre

ZCre is a consortium of researchers who have a shared interest in developing Cre/lox based tools for use in the zebrafish model organism. ZCre plans to generate 15 or more tissue-specific ERT2CreERT2 driver lines to be expressed in either differentiated cell types or precursor/stem cells, as well as 20 or more lines based upon multilox technology. One set of multilox transgenes will allow long-term permanent labelling of individual cells for lineage tracing and other applications. Another set will allow perturbation of single pathways within individual cells (PathM lines). In order to make these lines, ZCre has developed a three-way cloning system using Gateway technology (Invitrogen). Once constructs are made, they will be deposited at Addgene.org. Transgenic lines will be available from ZCre or from regional stock centers as requested.

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

NBDC - National Bioscience Database Center

The National Bioscience Database Center (NBDC) intends to integrate all databases for the life sciences in Japan, by linking each database with expediency to maximize convenience and make the entire system more user-friendly. We aim to focus our attention on the needs of the users of these databases, who have all too often been neglected in the past, rather than on the needs of the people tasked with the creation of databases. It is important to note that we will continue to honor the independent integrity of each database that will contribute to our endeavor, as we are fully aware that each database was originally crafted for specific purposes and divergent goals. Services:

* Database Catalog - A catalog of life science related databases constructed in Japan that are also available in English. Information such as URL, status of the database site (active vs. inactive), database provider, type of data, and subjects of the study is contained in each database record.
* Life Science Database Cross Search - A service for simultaneous searching across scattered life-science databases, ranging from molecular data to patents and literature.
* Life Science Database Archive - Maintains and stores the datasets generated by life scientists in Japan in a long-term and stable state as national public goods. The Archive makes it easier for many people to search datasets by metadata in a unified format, and to access and download the datasets under clear terms of use.
* Taxonomy Icon - A collection of icons (illustrations) of biological species that is free to use and distribute. There are more than 200 icons of various species, including Bacteria, Fungi, Protista, Plantae and Animalia.
* GenLibi (Gene Linker to bibliography) - An integrated database of human, mouse and rat genes that includes automatically integrated gene, protein, polymorphism, pathway, phenotype, and ortholog/protein sequence information, together with manually curated gene function, gene-related or co-occurring Disease/Phenotype, and bibliography information.
* Allie - A search service for abbreviations and long forms used in the life sciences. It addresses the problem that many abbreviations are used in the literature, and polysemous or synonymous abbreviations appear frequently, making it difficult to read and understand scientific papers outside the reader's expertise.
* inMeXes - A search service for English expressions (multiple words) that appear no fewer than 10 times in PubMed/MEDLINE titles or abstracts. In addition, you can easily access the sentences where an expression was used, or other related information, by clicking one of the search results.
* HOWDY (Human Organized Whole genome Database) - A database system for retrieving human genome information from 14 public databases by using official symbols and aliases. The information is updated daily by extracting data automatically from the genetic databases, and is displayed with all data that share identifiers linked to one another.
* MDeR (the MetaData Element Repository in life sciences) - A web-based tool designed to let you search, compare and view Data Elements. MDeR is based on ISO/IEC 11179 Part 3 (Registry metamodel and basic attributes).
* Human Genome Variation Database - A database for accumulating all kinds of human genome variations detected by various experimental techniques.
* MEDALS - A portal site that provides information about databases, analysis tools, and the relevant projects that were conducted with financial support from the Ministry of Economy, Trade and Industry of Japan.

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE)

Data set from a randomized controlled trial of cognitive interventions designed to maintain functional independence in elders by improving basic mental abilities. Several features made ACTIVE unique in the field of cognitive interventions: (a) use of a multi-site, randomized, controlled, single-blind design; (b) intervention on a large, diverse sample; (c) use of common multi-site intervention protocols; (d) primary outcomes focused on long-term, cognitively demanding functioning as measured by performance-based tests of daily activities; and (e) an intent-to-treat analytical approach. The clinical trial ended with the second annual post-test in January 2002. A third annual post-test was completed in December 2003. The area population and recruitment strategies at the six field sites provided a study sample varying in racial, ethnic, gender, socioeconomic, and cognitive characteristics. At baseline, data were collected by telephone for eligibility screening, followed by three in-person assessment sessions, including two individual sessions and one group session, and a self-administered questionnaire. At post-tests, data were collected in person in one individual session and one group session, as well as by self-administered questionnaire. There were four major categories of measures: proximal outcomes (measures of cognitive abilities that were direct targets of training), primary outcomes (measures of everyday functioning, both self-report and performance), secondary outcomes (measures of health, mobility, quality of life, and service utilization), and covariates (chronic disease, physical characteristics, depressive symptoms, cognitive impairment, psychosocial variables, and demographics). Phase I of ACTIVE was a randomized, controlled, single-blind trial utilizing a four-group design, including three treatment arms and a no-contact control group. Each treatment arm consisted of a 10-session intervention for one of three cognitive abilities: memory, reasoning, and speed of processing. Testers were blind to participant treatment assignment. The design allowed for testing of both social contact effects (via the contact control group) and retest effects (via the no-contact control group) on outcomes. Booster training was provided in each treatment arm to a 60% random subsample prior to the first annual post-test. Phase II of ACTIVE started in July 2003 as a follow-up study focused on measuring the long-term impact of training effects on cognitive function and cognitively demanding everyday activities. The follow-up consisted of one assessment including the Phase I post-test battery. This was completed in late 2004.

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

DIAN - Dominantly Inherited Alzheimer Network

THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 23, 2022. An international research partnership of leading scientists determined to understand a rare form of Alzheimer's disease that is caused by a gene mutation, and to establish a research database and tissue repository to support research on Alzheimer's disease by other investigators around the world. One goal of DIAN is to study possible brain changes that occur before Alzheimer's disease is expressed in people who carry an Alzheimer's disease mutation. Other family members without a mutation will serve as a comparison group. People in families in which a mutation has been identified will be tracked in order to detect physical or mental changes that might distinguish people who inherited the mutation from those who did not. DIAN currently involves eleven outstanding research institutions in the United States, United Kingdom, and Australia. John C. Morris, M.D., Friedman Distinguished Professor of Neurology at Washington University School of Medicine in St. Louis, is the principal investigator of the project.

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

Google Project Hosting

Project Hosting on Google Code provides a free collaborative development environment for open source projects.

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

EMBL - Bork Group

The main focus of this Computational Biology group is to predict function and to gain insights into evolution by comparative analysis of complex molecular data. The group currently works on three different scales: * genes and proteins, * protein networks and cellular processes, and * phenotypes and environments. All of these require both tool development and applications. Some selected projects include comparative gene, genome and metagenome analysis, mapping interactions to proteins and pathways, as well as the study of temporal and spatial protein network aspects. All are geared towards bridging genotype and phenotype through a better understanding of molecular and cellular processes. The services (resources and tools) developed by the Bork Group are mainly designed and maintained for research and academic purposes. Most of the services are published and documented in one or more papers. All our tools can be completely customized and integrated into your existing framework. This service is provided by the company biobyte solutions GmbH. Please visit their tools and services pages for full details and more information. Standard commercial licenses for our tools are also available through biobyte solutions GmbH. The group is partially associated with the Max Delbrück Center for Molecular Medicine (MDC), Berlin.

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

House of Mind

Neuroscience/psych blog by a neuroscientist in training. I mostly review articles and try to synthesize what I deem important/interesting. Enjoy!

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

Open Biosystems

Open Biosystems offers products that span Genomics, RNAi and Antibodies. Building on the rapid sharing model that is at the core of the Human Genome Project, Open Biosystems collaborates with some of the most innovative life science investigators working today. We partner with them to bring new products to market: they have often pioneered the new resources in their own labs, and we prepare them for widespread use and then provide access to the research community. Delivery of genetic content is our most recent technological breakthrough. Recently, we brought to market the Tranz-vector system, the safest human-based lentiviral delivery technology. Further supplementing our already strong line of RNA interference (RNAi) and complementary DNA (cDNA) products, this technology provides investigators with superior delivery capabilities for high-quality cellular screening. The combination of our unique Tranz-vector system and whole genome RNAi and cDNA content enables our customers to perform drug target validation on a large scale. With our genomics resources, Open Biosystems provides the content investigators utilize to unlock the functions of human genes and their relationships to normal and disease development. We offer the most complete gene library in the industry. This novel library consists of several full-length cDNA and open reading frame collections. Most prominent among these is the Mammalian Gene Collection (MGC), the industry's gold standard gene catalog. The discovery of RNA interference has revolutionized the way investigators approach the study of gene expression, regulation and interactions, particularly as it relates to drug development. Our collaboration with Drs. Greg Hannon (CSHL) and Steve Elledge (Harvard) has led the way in the evolution of short hairpin RNA (shRNA) technologies, providing the life science community with whole genome resources for human, mouse and rat with a multitude of technology and delivery advantages.

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

Yandell Lab Portal

Sequenced genomes contain a treasure trove of information about how genes function and evolve. Getting at this information, however, is challenging and requires novel approaches that combine computer science and experimental molecular biology. My lab works at the intersection of both domains, and research in our group can be summarized as follows: generate hypotheses concerning gene function and evolution by computational means, and then test these hypotheses at the bench. This is easier said than done, as serious barriers still exist to using sequenced genomes and their annotations as starting points for experimental work. Some of these barriers lie in the computational domain, others in the experimental. Though challenging, overcoming these barriers offers exciting training opportunities in both computer science and molecular genetics, especially for those seeking a future at the intersection of both fields. Ongoing projects in the lab are centered on genome annotation and comparative genomics; exploring the relationships between sequence variation and human disease; and high-throughput biological image analysis. Current software tools available:

* VAAST (the Variant Annotation, Analysis & Search Tool) - A probabilistic search tool for identifying damaged genes and their disease-causing variants in personal genome sequences. VAAST builds upon existing amino acid substitution (AAS) and aggregative approaches to variant prioritization, combining elements of both into a single unified likelihood framework that allows users to identify damaged genes and deleterious variants with greater accuracy, and in an easy-to-use fashion. VAAST can score both coding and non-coding variants, evaluating the cumulative impact of both types of variants simultaneously. VAAST can identify rare variants causing rare genetic diseases, and it can also use both rare and common variants to identify genes responsible for common diseases. VAAST thus has a much greater scope of use than any existing methodology.
* MAKER 2 (updated 01-16-2012) - MAKER is a portable and easily configurable genome annotation pipeline. Its purpose is to allow smaller eukaryotic and prokaryotic genome projects to independently annotate their genomes and to create genome databases. MAKER identifies repeats, aligns ESTs and proteins to a genome, produces ab-initio gene predictions, and automatically synthesizes these data into gene annotations with evidence-based quality values. MAKER is also easily trainable: outputs of preliminary runs can be used to automatically retrain its gene prediction algorithm, producing higher-quality gene models on subsequent runs. MAKER's inputs are minimal and its outputs can be directly loaded into a GMOD database. They can also be viewed in the Apollo genome browser; this feature of MAKER provides an easy means to annotate, view and edit individual contigs and BACs without the overhead of a database. MAKER should prove especially useful for emerging model organism projects with minimal bioinformatics expertise and computer resources.
* RepeatRunner - A CGL-based program that integrates RepeatMasker with BLASTX to provide a comprehensive means of identifying repetitive elements. Because RepeatMasker identifies repeats by means of similarity to a nucleotide library of known repeats, it often fails to identify highly divergent repeats and divergent portions of repeats, especially near repeat edges. To remedy this problem, RepeatRunner uses BLASTX to search a database of repeat-encoded proteins (reverse transcriptases, gag, env, etc.). Because protein homologies can be detected across larger phylogenetic distances than nucleotide similarities, this BLASTX search allows RepeatRunner to identify divergent protein-coding portions of retro-elements and retroviruses not detected by RepeatMasker. RepeatRunner merges its BLASTX and RepeatMasker results to produce a single, comprehensive XML-based output. It also masks the input sequence appropriately. In practice, RepeatRunner has been shown to greatly improve the efficacy of repeat identification. RepeatRunner can also be used in conjunction with PILER-DF - a program designed to identify novel repeats - and RepeatMasker to produce a comprehensive system for repeat identification, characterization, and masking in newly sequenced genomes.
* CGL - A software library designed to facilitate the use of genome annotations as substrates for computation and experimentation; we call it CGL, an acronym for Comparative Genomics Library, and pronounce it Seagull. The purpose of CGL is to provide an informatics infrastructure for a laboratory, department, or research institute engaged in the large-scale analysis of genomes and their annotations.
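
As a concrete illustration of what downstream handling of annotation output like MAKER's can look like, here is a minimal sketch that summarizes gene models from GFF3 records. This is not part of the MAKER distribution; the records below are hypothetical examples, and the sketch assumes only the standard GFF3 format that MAKER-style pipelines emit.

```python
# Minimal sketch: summarizing gene models from GFF3 annotation output of the
# kind MAKER produces. The two records below are hypothetical examples; real
# output would be read from a file produced by the pipeline.
from collections import defaultdict

example_records = [
    ["scaffold_1", "maker", "gene", "1300", "9000", ".", "+", ".",
     "ID=gene-0001;Name=gene-0001"],
    ["scaffold_1", "maker", "mRNA", "1300", "9000", ".", "+", ".",
     "ID=mrna-0001;Parent=gene-0001"],
]
gff3_lines = ["##gff-version 3"] + ["\t".join(rec) for rec in example_records]

def parse_attributes(column9: str) -> dict:
    """Split the ninth GFF3 column (semicolon-separated key=value pairs)."""
    return dict(pair.split("=", 1) for pair in column9.split(";") if "=" in pair)

transcripts_per_gene = defaultdict(int)
for line in gff3_lines:
    if line.startswith("#"):
        continue
    columns = line.split("\t")
    if len(columns) != 9:
        continue  # skip malformed lines and any trailing FASTA section
    feature_type, attributes = columns[2], parse_attributes(columns[8])
    if feature_type == "mRNA":
        transcripts_per_gene[attributes.get("Parent", "unknown")] += 1

for gene_id, count in transcripts_per_gene.items():
    print(f"{gene_id}: {count} transcript(s)")
```

The same parsing step is the usual entry point for loading such annotations into a database or browser track, since every tool in the list above ultimately exchanges feature tables of this shape.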

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

American Federation for Aging Research

A non-profit organization that supports the advance of healthy aging through biomedical research.

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

Mind Research Network - COINS

A web-based neuroimaging and neuropsychology software suite that offers versatile, automatable data upload/import/entry options, rapid and secure sharing of data among PIs, querying and export of all data, real-time reporting, and HIPAA- and IRB-compliant study-management tools suitable for large institutions as well as smaller-scale neuroscience and neuropsychology researchers. COINS manages over 400 studies, more than 265,000 clinical neuropsychological assessments, and 26,000 MRI, EEG, and MEG scan sessions collected from 18,000 participants at over ten institutions, on topics related to the brain and behavior. As neuroimaging research continues to grow, dynamic neuroinformatics systems are necessary to store, retrieve, mine and share the massive amounts of data. The Collaborative Informatics and Neuroimaging Suite (COINS) has been created to facilitate communication and cultivate a data community. This tool suite offers versatile data upload/import/entry options, rapid and secure sharing of data among PIs, querying of data types and assessments, real-time reporting, and study-management tools suitable for large institutions as well as smaller-scale researchers. It manages studies and their data at the Mind Research Network, the Nathan Kline Institute, the University of Colorado Boulder, the Olin Neuropsychiatry Research Center at Hartford Hospital, and others. COINS is dynamic and evolves as the neuroimaging field grows. COINS consists of the following collaboration-centric tools:

* Subject and Study Management: MICIS (Medical Imaging Computer Information System) is a centralized PostgreSQL-based web application that implements best practices for participant enrollment and management. Research site administrators can easily create and manage studies, as well as generate reports useful for reporting to funding agencies.
* Scan Data Collection: An automated DICOM receiver collects, archives, and imports imaging data into the file system and COINS, requiring no user intervention (see the sketch following this list). The database also offers scan annotation and behavioral data management, radiology review event reports, and scan time billing.
* Assessment Data Collection: Clinical data gathered from interviews, questionnaires, and neuropsychological tests are entered into COINS through the web application called Assessment Manager (ASMT). ASMT's intuitive design allows users to start data collection with little or no training. ASMT offers several options for data collection/entry: dual data entry for paper assessments; the Participant Portal, an online tool that allows subjects to fill out questionnaires; and Tablet entry, an offline data entry tool.
* Data Sharing: De-identified neuroimaging datasets with associated clinical data, cognitive data, and associated metadata are available through the COINS Data Exchange tool. The Data Exchange is an interface that allows investigators to request and share data. It also tracks data requests and keeps an inventory of data that has already been shared between users. Once requests for data have been approved, investigators can download the data directly from COINS.
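
To make the automated DICOM receiver pattern concrete, below is a minimal, illustrative sketch of a storage SCP built with the pynetdicom library. This is not COINS source code; the port number, AE title, and archive directory are assumptions made for the example.

```python
# Illustrative sketch of an automated DICOM store receiver (not COINS code).
# Port 11112, the AE title, and the archive directory are assumptions.
from pathlib import Path

from pynetdicom import AE, evt, AllStoragePresentationContexts

ARCHIVE = Path("incoming_dicom")  # hypothetical archive location
ARCHIVE.mkdir(exist_ok=True)

def handle_store(event):
    """Write each received instance to disk, named by its SOP Instance UID."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(ARCHIVE / f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
    return 0x0000  # DICOM Success status

ae = AE(ae_title="ARCHIVE_SCP")
ae.supported_contexts = AllStoragePresentationContexts
# Blocks and listens for incoming C-STORE requests from scanner or PACS nodes.
ae.start_server(("0.0.0.0", 11112), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```

A scanner or PACS node configured to push to this address would then have every instance written to disk, ready for a separate step that imports the files and their metadata into a study database.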

  • Resource
  • SciCrunch
  • 14 years ago - by Anonymous

ResearchRaven

A database of funding opportunities, professional conferences, calls for papers and other research-related materials. ResearchRaven is a public service provided by the Samaritan Health Services Center for Health Research and Quality.

  • Resource
  • SciCrunch
  • 15 years ago - by Anonymous

CPCTR: Cooperative Prostate Cancer Tissue Resource

THIS RESOURCE IS NO LONGER IN SERVICE. Documented on September 23, 2022. The National Cancer Institute initially established the Cooperative Prostate Cancer Tissue Resource (CPCTR) to provide prostate cancer tissue samples with clinical annotation to researchers. The Resource provides access to formalin-fixed, paraffin-embedded primary prostate cancer tissue with associated clinical and follow-up data for research studies, particularly studies focused on translating basic research findings into clinical application. Fresh-frozen tissue is also available, with limited clinical follow-up information since these are more recent cases. The Resource database contains pathologic and clinical information linked to a large collection of prostate tissue specimens that is available for research. Researchers can determine whether the Resource has the tissues and patient data they need for their individual research studies. Consultation and interpretive services: assistance is available from trained CPCTR pathologists. The CPCTR can provide consultative assistance in staining interpretation and scoring on a collaborative basis. Fresh Frozen and Paraffin Tissue: the resource has over 7,000 annotated cases (including 7,635 specimens and 38,399 annotated blocks). Tissue Microarrays (TMA): the CPCTR has slides from prostate cancer TMAs with associated clinical data. The information provided for each case on the arrays (derived from radical prostatectomy specimens) includes: age at diagnosis, race, PSA at diagnosis, tumor size, TNM stage, Gleason score and grade, vital status, and other variables.

  • Resource
  • SciCrunch
  • 15 years ago - by Anonymous

Online Education for the International Research Community: Introduction to Clinical Drug and Substance Abuse Research Methods

THIS RESOURCE IS NO LONGER IN SERVICE, documented on November 07, 2012. December 15, 2011 - Thank you for your interest in DrugAbuseResearchTraining.org. The site, courses, and resources are no longer available. Please send an email to inquiry (at) md-inc.com if you would like to be notified if the site or courses become available again. Introduction to Clinical Drug and Substance Abuse Research Methods is an online training program intended to introduce clinicians and substance abuse professionals to basic clinical research methods. The program is divided into four modules. Each module covers an entire topic and includes self-assessment questions, references, and online resources:

* The Neurobiology of Drug Addiction
* Biostatistics for Drug and Substance Abuse Research
* Evaluating Drug and Substance Abuse Programs
* Designing and Managing Drug and Substance Abuse Clinical Trials

The learning objectives of this program are to help you:

* Evaluate the benefits of alternative investigative approaches for answering important questions in drug abuse evaluation and treatment.
* Define the proper levels of measurement and appropriate statistical methods for a clinical study.
* Address common problems in data collection and analysis.
* Anticipate key human subjects and ethical issues that arise in drug abuse studies.
* Interpret findings from the drug abuse research literature and prepare a clinical research proposal.
* Prepare research findings for internal distribution or publication in the peer-reviewed literature.
* Recognize drug addiction as a cyclical, chronic disease.
* Understand and describe the brain circuits that are affected by addicting drugs, and explain to others the effects of major classes of addicting drugs on brain neurotransmitters.
* Utilize new pharmacologic treatments to manage persons with drug addiction.

Physicians can earn AMA PRA Category 1 Credit and purchase a high-resolution printable electronic CME certificate (view sample); non-physicians can purchase a high-resolution printable electronic certificate of course participation that references AMA PRA Category 1 credit (view sample). This program does not offer printed certificates.

  • Resource
  • SciCrunch
  • 16 years ago - by Anonymous

aTag Generator

THIS RESOURCE IS NO LONGER IN SERVICE, documented on August 13, 2012. aTags are snippets of HTML that capture the information that is most important in a machine-readable, interlinked format, making it easier to see the big picture. aTags work with any Web text and can store and connect any textual element that is highlighted in a browser. The structure of the embedded RDF/OWL is decidedly simple: a very short piece of human-readable text that is "tagged" with relevant ontological entities. An aTag generator can be easily added to any web browser and allows researchers to quickly generate aTags out of key statements from web pages, such as PubMed abstracts. The resulting aTags can be embedded anywhere on the web, for example on blogs, wikis, or biomedical databases. The aTag approach demonstrates how the resulting statements distributed over the web can be searched, visualized and aggregated with Semantic Web / Linked Data tools, and how aTags can be used to answer practically relevant biomedical questions even though their structure is very simple. aTags are based on Semantic Web standards and Linked Data practices. Specifically, they make use of RDFa, the SIOC vocabulary, and various domain ontologies and taxonomies that are available in RDF/OWL format. The autocomplete functionality is based on Apache Solr. Reference: Simple, ontology-based representation of biomedical statements through fine-granular entity tagging and new web standards. Matthias Samwald and Holger Stenzhorn. Bio-Ontologies 2009.
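
To make the idea of "a short human-readable text tagged with ontological entities" concrete, here is a hedged sketch of what such an RDFa-annotated snippet might look like. The source confirms aTags use RDFa and the SIOC vocabulary; the specific properties, ontology URIs, and example statement below are illustrative assumptions, not the exact markup the aTag generator emits.

```python
# Illustrative sketch of an aTag-style snippet: a short piece of text marked up
# with RDFa so the entities it mentions resolve to ontology URIs. The dcterms
# property and the DOID/ncbigene identifiers are assumptions for illustration.
snippet = """
<div xmlns:sioc="http://rdfs.org/sioc/ns#"
     xmlns:dcterms="http://purl.org/dc/terms/"
     typeof="sioc:Item">
  <span property="dcterms:title">
    <a rel="sioc:topic" href="http://purl.obolibrary.org/obo/DOID_1612">breast cancer</a>
    is associated with mutations in
    <a rel="sioc:topic" href="http://identifiers.org/ncbigene/672">BRCA1</a>.
  </span>
</div>
"""

# Such snippets can be pasted into blogs, wikis, or database pages; Linked Data
# tools can then extract the embedded statements and aggregate them.
print(snippet.strip())
```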

  • Resource
  • SciCrunch
  • 16 years ago - by Anonymous

Ensembl Metazoa

The Ensembl Genomes project produces genome databases for important species from across the taxonomic range, using the Ensembl software system. Five sites are now available, one of which is Ensembl Metazoa, which houses metazoan species.

  • Resource
  • SciCrunch
  • 16 years ago - by Anonymous

Sectional Atlas of Human Brain and Spinal Cord

Sectional atlas featuring sections of the spinal cord and brain for a neuroanatomy course offered by Temple University. Labels may be turned on and off.

  • Resource
  • SciCrunch
  • 16 years ago - by Anonymous