News
New BESA Research Version 7.0
Latest version released
BESA Research version 7.0 has now officially been released!
Order the latest version of BESA Research now, available online in our webshop!
BESA Research 7.0 Features & Highlights
Data review and pre-processing:
- Simultaneous EEG-fMRI data review: Correction of fMRI artifacts is now possible directly in the BESA Research review window. One of three proven methods can be selected, and the effects on the correction matrix directly visualized.
- Use this feature for further analysis: Read your fMRI data directly into BESA Research’s Source Analysis window, seed sources from fMRI and directly see activation patterns on a millisecond scale!
- A new ICA method, SOBI (Second-Order Blind Identification), has been introduced.
- New readers for the following data formats are available:
- EEG: Neuroscan Curry version 7
- EEG: Neuralynx
- MEG: RICOH
- Sequential Semi-Analytic Monte-Carlo Estimation (SESAME) of sources is a new automated localization method that uses Bayesian statistics to find the most likely solution and displays the likelihood distribution as a volume image.
- Time-domain beamforming has been introduced using either multiple or single sources; it can be applied as a vector or scalar beamformer, and state-of-the-art options for calculating spatial filters and weights are available.
- Use this feature to reconstruct source waveforms, and seed virtual sensors from the results with a few clicks. Virtual sensor montages can be applied to raw data in further analyses.
- Brain atlases are available. Several state-of-the-art atlases can be selected, and displayed in user-defined overlay modes. It is possible to combine brain atlas images with other source imaging results.
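As an aside, the core computation of a scalar LCMV-type beamformer (one common spatial-filter formulation, not necessarily the exact method BESA uses) can be sketched in a few lines. The covariance matrix and lead field below are simulated placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors = 32

# Simulated sensor covariance C (symmetric positive definite stand-in for data)
A = rng.standard_normal((n_sensors, n_sensors))
C = A @ A.T + n_sensors * np.eye(n_sensors)

# Hypothetical lead field l for a single source with fixed orientation
l = rng.standard_normal(n_sensors)

# Scalar LCMV weights: w = C^-1 l / (l^T C^-1 l)
Ci_l = np.linalg.solve(C, l)
w = Ci_l / (l @ Ci_l)

# The unit-gain constraint passes the modeled source with gain 1
# while minimizing variance from all other directions.
print(round(float(w @ l), 6))  # → 1.0
```

Applying `w` to each time sample of the sensor data yields a reconstructed source waveform, which is the kind of output that can then be used to seed virtual sensors.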
Brain connectivity / source coherence:
- The new BESA Connectivity program can be started directly from the Coherence dialog, for enhanced methods in time-frequency decomposition and brain connectivity analysis (release expected in the next three weeks):
- wavelets and / or complex demodulation
- connectivity analysis in source space or sensor space
- latest connectivity methods including Granger Causality, Partial Directed Coherence, Directed Transfer Function, and more
- visualize data in clear 2D and 3D result plots and create publication images or videos
- Single events or any data block can now be plotted in a time-frequency plot.
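To illustrate one of the decomposition methods listed above: complex demodulation estimates the time-varying amplitude at a chosen frequency by shifting that frequency down to 0 Hz and lowpass filtering. The following sketch (illustrative signal and parameters, not BESA's implementation) recovers the envelope of a 10 Hz oscillation:

```python
import numpy as np

fs = 256.0
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 10.0 * t)  # a 10 Hz oscillation

f0 = 10.0
demod = signal * np.exp(-2j * np.pi * f0 * t)  # shift f0 down to 0 Hz

# Crude lowpass: moving average over ~0.25 s
win = int(fs * 0.25)
kernel = np.ones(win) / win
envelope = 2 * np.abs(np.convolve(demod, kernel, mode="same"))

# Away from the edges the envelope approximates the true amplitude (1.0)
print(round(float(envelope[len(envelope) // 2]), 2))  # → 1.0
```

Repeating this for a range of frequencies f0 yields the rows of a time-frequency plot.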
New Hyper-Sampling ActiveTwo
With a Hyper-Sampling Rate of 262,144 Hz!
We're proud to announce the latest and only hyper-sampling EEG system with active electrodes! Equipped with 8 EX (external) inputs on top and the newest SAR (Successive-Approximation-Register) ADC technology, this active EEG system can sample at an incredibly fast rate of up to 262,144 Hz!
Its main application is EEG/ABR on subjects with cochlear implants, as cochlear implants generate vast amounts of high-frequency interference that covers the EEG. When the high-frequency interference is sampled accurately together with the EEG, the interference can be calculated out of the signal and the clean EEG recovered. With conventional sampling rates, the high-frequency interference is instead aliased (smeared) into lower frequencies within the EEG bandwidth, and the EEG cannot be recovered. No other EEG amplifier can measure such a signal with this accuracy (fast slew rate, minimal ringing)!
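The aliasing effect described above can be illustrated with a small calculation (the interference frequency is a hypothetical example, not a measured cochlear-implant value):

```python
# Aliasing sketch: why undersampled high-frequency interference lands in the EEG band.

def alias_frequency(f_signal_hz: float, fs_hz: float) -> float:
    """Frequency at which a pure tone appears after sampling at fs_hz."""
    return abs(f_signal_hz - round(f_signal_hz / fs_hz) * fs_hz)

f_interference = 5000.0  # hypothetical implant interference component

# Conventional EEG sampling: the tone folds into the EEG band.
print(alias_frequency(f_interference, 256.0))      # → 120.0 Hz

# Hyper-sampling at 262,144 Hz: Nyquist is 131,072 Hz, so the tone
# stays at its true frequency and can be filtered or modeled out.
print(alias_frequency(f_interference, 262144.0))   # → 5000.0 Hz
```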
Check out more information and technical specifications on the newest hyper-sampling ActiveTwo and contact us to receive a quote today!
First Dry & Active EEG System with Integrated VR Headset
We're proud to announce the release of the first ever EEG System with Dry and Active EEG electrodes with integrated VR Headset! The DSI-VR300 from Wearable Sensing is a research grade EEG system specifically designed for P300, BCI and Neuropsychology research projects where the integration of a VR headset is required.
Contact us today and request further information or a quote!
Events
Medica 2018, Düsseldorf - Germany
12 - 15 November, 2018 | Düsseldorf, Germany
NEUROSPEC AG will participate at MEDICA 2018 in Düsseldorf, Germany! The conference will be held from 12 - 15 November, 2018. Marc and Maximilian Mosimann can mostly be found in Hall 9. You can always contact us during visiting hours via email to firstname.lastname@example.org or via the Contact page. We look forward to seeing you there!
28th ASP Conference (2018), Geelong, Victoria - Australia
19 - 21 November, 2018 | Geelong, Victoria, Australia
The 28th Annual Meeting of the Australasian Society for Psychophysiology will this year be hosted by Deakin University and held at the Geelong Waterfront Campus in Geelong, Victoria, from Monday, 19 November to Wednesday, 21 November 2018.
We are happy to announce our official sponsorship of the ASP Conference 2018 and wish all participants a splendid conference! We're even more excited to announce that this will be our first time attending the conference, and we'll even have a booth. Get in touch with us if you wish to see our latest products, discuss any of your projects, or simply meet up for a coffee!
8th ACNS (Australasian Cognitive Neuroscience Conference), Melbourne - Australia
22 - 25 November, 2018 | Melbourne, Australia
Join us for the 8th Australasian Cognitive Neuroscience Society (ACNS) Conference! The conference will be held from 22 - 25 November, 2018, at the School of Psychological Sciences, University of Melbourne, Australia.
We welcome members from disciplines such as
- Attention, Sensation, and Perception
- Cognition and Decision-Making
- Emotional and Social Processes
- Language, Learning, and Memory
- Motor Processes
- Ageing and Developmental Cognitive Neuroscience
- and generally all with an interest in the relationships between the brain, mind, and behaviour.
As in previous years, various workshops will be organized during the conference, where you'll also get the opportunity to exchange knowledge and experience with other researchers.
Make sure to register for the conference via the ACNS 2018 Registration page today!
Using Facial Expressions to Understand Behaviour - Register Now!
Thursday September 27, 17:00-18:00 CEST
Researchers often need to explain complex behaviors tied to emotional response. Collecting good emotional data and translating that data into a conclusion is critical when understanding and explaining behavior.
One way to collect and interpret facial-expression data is with the FaceReader software. FaceReader is emotion-recognition software that analyzes facial movements to classify a subject's response. Facial expressions can be classified as happy, sad, scared, disgusted, surprised, angry, contempt, or neutral.
Join Frazer Findlay, CEO of BIOPAC, to learn the differences between facial coding using FaceReader and facial EMG. Frazer will explain the distinctions of facial coding and why you would use software like FaceReader versus collecting fEMG data from the facial muscles.
What You Will Learn
- Difference between Facial Coding and Facial EMG plus advantages and disadvantages of each
- Methodology for collecting data
- How to bring data into AcqKnowledge seamlessly and in real time
- Video Synchronization
Plus, see a live demo of the software needed to collect facial expression data.