Peter Vangheluwe
Mark Ainsley Colijn
Stephanie Vrijsen
Ping Yee Billie Au
Rania Abou El Asrar
Marine Houdou
Chris Van den Haute
Justyna Sarna
Greg Montgomery
Abstract
Biallelic (autosomal recessive) pathogenic variants in ATP13A2 cause a form of juvenile-onset parkinsonism, termed Kufor-Rakeb syndrome. In addition to motor symptoms, a variety of other neurological and psychiatric symptoms may occur in affected individuals, including supranuclear gaze palsy and cognitive decline. Although psychotic symptoms are often reported, response to antipsychotic therapy is not well described in previous case reports/series. As such, we describe treatment response in an individual with Kufor-Rakeb syndrome-associated psychosis. His disease was caused by a novel homozygous loss-of-function ATP13A2 variant (NM_022089.4, c.1970_1975del) that was characterized in this study. Our patient exhibited a good response to quetiapine monotherapy, which he has so far tolerated well. We also reviewed the literature and summarized all previous descriptions of antipsychotic treatment response. Although its use has been described infrequently in Kufor-Rakeb syndrome, quetiapine is commonly used in other degenerative parkinsonian disorders, given its lower propensity to cause extrapyramidal symptoms. As such, quetiapine should be considered in the treatment of Kufor-Rakeb syndrome-associated psychosis when antipsychotic therapy is deemed necessary.
Posted 2 months ago
Effective management of the COVID-19 pandemic requires widespread and frequent testing of the population for SARS-CoV-2 infection. Saliva has emerged as an attractive alternative to nasopharyngeal samples for surveillance testing, as it does not require specialized personnel or materials for its collection and can be easily provided by the patient. We have developed a simple, fast, and sensitive saliva-based testing workflow that requires minimal sample treatment and equipment. After sample inactivation, RNA is quickly released and stabilized in an optimized buffer, followed by reverse transcription loop-mediated isothermal amplification (RT-LAMP) and detection of positive samples using a colorimetric and/or fluorescent readout. The workflow was optimized using 1,670 negative samples collected from 172 different individuals over the course of 6 months. Each sample was spiked with 50 copies/μL of inactivated SARS-CoV-2 virus to monitor the efficiency of viral detection. Using pre-defined clinical samples, the test was determined to be 100% specific and 97% sensitive, with a limit of detection of 39 copies/mL. The method was successfully implemented in a CLIA laboratory setting for workplace surveillance and reporting. From April 2021 to February 2022, more than 30,000 self-collected samples from 755 individuals were tested; 85 employees tested positive, mainly during December and January, consistent with high infection rates in Massachusetts and nationwide.
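As a point of reference for the performance figures quoted above, the sketch below shows how sensitivity and specificity are conventionally derived from a confusion matrix of test calls against a reference method. The counts are invented placeholders used only to make the arithmetic concrete; they are not the study's data.

```python
# Hypothetical illustration of how sensitivity and specificity are computed
# from test results compared against a reference method (e.g., RT-qPCR).
# The counts below are placeholders, not data from the study.

true_positives = 33   # positive by RT-LAMP and by the reference method
false_negatives = 1   # missed by RT-LAMP, positive by the reference method
true_negatives = 60   # negative by both methods
false_positives = 0   # positive by RT-LAMP, negative by the reference method

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # ~97% with these placeholder counts
print(f"Specificity: {specificity:.0%}")  # 100% with these placeholder counts
```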
Posted 2 months ago
Gemechu Feyisa Yadeta
Institution: Physics Department, College of Natural and Computational Sciences, Mattu University, Mattu
In this work, the alpha-particle-induced reaction on Cadmium-116 in the energy range 20-40 MeV has been studied. The excitation functions for the following reaction channels were studied in the energy range 15-40 MeV:
48-Cd-116(α, n) 50-Sn-119, with a total exciton number of six, one neutron and one hole;
48-Cd-116(α, 2n + p) 49-In-117, with TD = 10, Ex1 = 3 and Ex2 = 3;
48-Cd-116(α, 3n) 50-Sn-117, with TD = 10, Ex1 = 3 and Ex2 = 3;
48-Cd-116(α, 3n + p) 49-In-116, with TD = 12, Ex1 = 4 and Ex2 = 4;
48-Cd-116(α, n + α) 48-Cd-115, with TD = 14, Ex1 = 1, Ex2 = 5 and Ex3 = 4.
A comparative analysis was performed for these reaction channels of the 116-Cd target nucleus. The experimentally measured excitation functions, obtained from the IAEA EXFOR data library, were compared with theoretical calculations made by the COMPLET code with and without the inclusion of pre-equilibrium particle emission. The level density parameter was varied to obtain good agreement between the calculated and the measured data with minimum effort on the fitting parameter.
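To illustrate the kind of comparison described above, here is a minimal sketch that plots a measured excitation function against calculated curves with and without pre-equilibrium emission. The file names and column layout are assumptions made for illustration; actual EXFOR retrievals and COMPLET outputs come in their own formats.

```python
# Hypothetical sketch: compare a measured excitation function (e.g., an EXFOR
# retrieval) with calculated curves obtained with and without pre-equilibrium
# emission. File names and column layouts are assumed for illustration only.
import numpy as np
import matplotlib.pyplot as plt

# Assumed plain-text files: energy (MeV), cross section (mb) [, uncertainty (mb)]
e_exp, xs_exp, d_xs = np.loadtxt("cd116_a_n_exfor.txt", unpack=True)
e_eq, xs_eq = np.loadtxt("cd116_a_n_equilibrium_only.txt", unpack=True)
e_pe, xs_pe = np.loadtxt("cd116_a_n_with_preequilibrium.txt", unpack=True)

plt.errorbar(e_exp, xs_exp, yerr=d_xs, fmt="o", label="EXFOR (measured)")
plt.plot(e_eq, xs_eq, "--", label="Calculation, equilibrium only")
plt.plot(e_pe, xs_pe, "-", label="Calculation, with pre-equilibrium")
plt.xlabel("Alpha energy (MeV)")
plt.ylabel("Cross section (mb)")
plt.title("116Cd(α, n)119Sn excitation function")
plt.legend()
plt.show()
```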
Posted 6 months ago
Houda Chakib
Institution: Data4Earth Laboratory, Faculty of Sciences and Technics
Email: houda.chakib@yahoo.fr
Najlae Idrissi
Institution: Data4Earth Laboratory, Faculty of Sciences and Technics
Email: n.idrissi@usms.ma
Oussama Jannani
Institution: Data4Earth Laboratory, Faculty of Sciences and Technics
Email: o.jannani@gmail.com
In recent years, image compression techniques have received a lot of attention from researchers as the number of images at hand keeps growing. The Discrete Wavelet Transform is one of them; it has been utilized in a wide range of applications and has shown its efficiency in the image compression field. Moreover, combined with various other approaches, this compression technique has proven its ability to compress images at high compression ratios while maintaining good visual image quality. The work presented in this paper deals with a combination of deep learning algorithms and the wavelet transform approach, which we implement in different color spaces. Specifically, we investigate the RGB and luminance/chrominance (YCbCr) color spaces to develop three image compression models based on a Convolutional Auto-Encoder (CAE). To evaluate the models' performance, we used 24 raw images taken from the Kodak database, applied the approaches to each of them, and compared the experimental results with those obtained using a standard compression method. We draw this comparison in terms of three performance parameters: the Structural Similarity Index Measure (SSIM), the Peak Signal-to-Noise Ratio (PSNR), and the Mean Square Error (MSE). The results indicate that the proposed schemes achieve a significant improvement in distortion metrics over the traditional image compression method, especially for the SSIM parameter, and reduce MSE values by more than 50%. In addition, the proposed schemes output images with high visual quality, in which details and textures are clear and distinguishable.
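As a reference point for the comparison described above, the sketch below computes the three reported distortion metrics (SSIM, PSNR, MSE) between an original image and its reconstruction using scikit-image. The file names are placeholders, not the Kodak images used in the paper, and data_range=255 assumes 8-bit images.

```python
# Hypothetical sketch: compute the distortion metrics named in the abstract
# (SSIM, PSNR, MSE) between an original image and its compressed/reconstructed
# version. File names are placeholders, not the Kodak images used in the paper.
from skimage import io
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

original = io.imread("original.png")
reconstructed = io.imread("reconstructed.png")

mse = mean_squared_error(original, reconstructed)
psnr = peak_signal_noise_ratio(original, reconstructed, data_range=255)
# channel_axis=-1 tells SSIM that the last axis holds the color channels.
ssim = structural_similarity(original, reconstructed,
                             data_range=255, channel_axis=-1)

print(f"MSE:  {mse:.2f}")
print(f"PSNR: {psnr:.2f} dB")
print(f"SSIM: {ssim:.4f}")
```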
Posted 1 year ago
Huan-Yu Chen
Institution: Department of Computer Science and Information Engineering, National Taichung University of Science and Technology
Chuen-Horng Lin
Institution: Department of Computer Science and Information Engineering, National Taichung University of Science and Technology
Jyun-Wei Lai
Institution: Department of Computer Science and Information Engineering, National Taichung University of Science and Technology
Yung-Kuan Chan
Institution: Department of Management Information Systems, National Chung Hsing University
This paper proposes a multi–convolutional neural network (CNN)-based system for the detection, tracking, and recognition of the emotions of dogs in surveillance videos. This system detects dogs in each frame of a video, tracks the dogs in the video, and recognizes the dogs’ emotions. The system uses a YOLOv3 model for dog detection. The dogs are tracked in real time with a deep association metric model (DeepDogTrack), which uses a Kalman filter combined with a CNN for processing. Thereafter, the dogs’ emotional behaviors are categorized into three types—angry (or aggressive), happy (or excited), and neutral (or general) behaviors—on the basis of manual judgments made by veterinary experts and custom dog breeders. The system extracts sub-images from videos of dogs, determines whether the images are sufficient to recognize the dogs’ emotions, and uses the long short-term deep features of dog memory networks model (LDFDMN) to identify the dog’s emotions. The dog detection experiments were conducted using two image datasets to verify the model’s effectiveness, and the detection accuracy rates were 97.59% and 94.62%, respectively. Detection errors occurred when the dog’s facial features were obscured, when the dog was of a special breed, when the dog’s body was covered, or when the dog region was incomplete. The dog-tracking experiments were conducted using three video datasets, each containing one or more dogs. The highest tracking accuracy rate (93.02%) was achieved when only one dog was in the video, and the highest tracking rate achieved for a video containing multiple dogs was 86.45%. Tracking errors occurred when the region covered by a dog’s body increased as the dog entered or left the screen, resulting in tracking loss. The dog emotion recognition experiments were conducted using two video datasets. The emotion recognition accuracy rates were 81.73% and 76.02%, respectively. Recognition errors occurred when the background of the image was removed, resulting in the dog region being unclear and the incorrect emotion being recognized. Of the three emotions, anger was the most prominently represented; therefore, the recognition rates for angry emotions were higher than those for happy or neutral emotions. Emotion recognition errors occurred when the dog’s movements were too subtle or too fast, the image was blurred, the shooting angle was suboptimal, or the video resolution was too low. Nevertheless, the current experiments revealed that the proposed system can correctly recognize the emotions of dogs in videos. The accuracy of the proposed system can be dramatically increased by using more images and videos for training the detection, tracking, and emotional recognition models. The system can then be applied in real-world situations to assist in the early identification of dogs that may exhibit aggressive behavior.
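To make the three-stage structure of the system easier to follow, here is a minimal skeleton of a detect, track, and classify-emotion video loop. The three stub classes are hypothetical placeholders standing in for the paper's YOLOv3 detector, DeepDogTrack tracker, and LDFDMN emotion classifier; they are not the authors' implementation, and the video file name is likewise a placeholder.

```python
# Hypothetical skeleton of the detect -> track -> classify-emotion pipeline
# described in the abstract. The stub classes below are placeholders for the
# paper's YOLOv3 detector, DeepDogTrack tracker, and LDFDMN classifier.
import cv2

class DogDetector:          # placeholder for the YOLOv3 detection stage
    def detect(self, frame):
        return []           # would return dog bounding boxes (x, y, w, h)

class DogTracker:           # placeholder for the DeepDogTrack tracking stage
    def update(self, frame, boxes):
        return []           # would return (track_id, cropped dog image) pairs

class EmotionClassifier:    # placeholder for the LDFDMN emotion-recognition stage
    def predict(self, crop):
        return "neutral"    # would return "angry", "happy", or "neutral"

detector, tracker, classifier = DogDetector(), DogTracker(), EmotionClassifier()
capture = cv2.VideoCapture("dogs.mp4")   # placeholder file name

while True:
    ok, frame = capture.read()
    if not ok:
        break
    boxes = detector.detect(frame)           # 1. detect dogs in the frame
    tracks = tracker.update(frame, boxes)    # 2. assign persistent track IDs
    for track_id, crop in tracks:
        print(track_id, classifier.predict(crop))   # 3. per-dog emotion label
capture.release()
```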
Posted 1 year ago
Joan Danielle K. Ongchoco
Institution: Department of Psychology
Madeline Gedvila
Institution: Department of Psychology
Wilma A. Bainbridge
Institution: Department of Psychology
Time is the fabric of experience — yet it is incredibly malleable in the mind of the observer: seeming to drag on, or fly right by at different moments. One of the most influential drivers of temporal distortions is attention, where heightened attention dilates subjective time. But an equally important feature of subjective experience involves not just the objects of attention, but also what information gets tagged to be remembered or forgotten in the first place, independent of attention (i.e. intrinsic image memorability). Here we test how memorability influences time perception. Observers viewed scenes in an oddball paradigm, where the last scene could be a forgettable “oddball” amidst memorable ones, or vice versa. Subjective time dilation occurred only for forgettable oddballs, but not memorable ones — demonstrating an oddball effect where the oddball did not differ in low-level visual features, image category, or even subjective memorability. But more importantly, these results emphasize how memory can interact with temporal experience: forgettable endings amidst memorable sequences dilate our experience of time.
Posted 1 year ago
Abstract
This article presents a fast parallel lossless technique and a lossy image compression technique for 16-bit single-channel images. Nowadays, such techniques are “a must” in robotics and other areas where several depth cameras are used. Since many of these algorithms need to run on low-profile hardware, such as embedded systems, they should be very fast and customizable. The proposal is based on treating depth images as surfaces, so the idea is to split the image into a set of polynomial functions, each of which describes a part of the surface. The algorithm proposed herein can achieve a similar, or better, compression rate and especially higher speed rates than existing techniques. It also has the potential to be fully parallelized and to run on several cores. Compared to other approaches, this feature makes it useful for handling and streaming multiple cameras simultaneously. The algorithm is assessed in different situations and on different hardware. Its implementation is rather simple, and the evaluation is carried out with LIDAR-captured images. Therefore, this work is accompanied by an open implementation in C++.
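To illustrate the core idea of describing a depth patch as a polynomial surface, here is a minimal sketch that fits a low-order 2D polynomial to one block of a 16-bit depth image by least squares and reconstructs it from the coefficients. The block size, polynomial order, and synthetic data are arbitrary choices made for illustration, not the parameters or data used in the paper.

```python
# Hypothetical sketch of the core idea: approximate one block of a 16-bit depth
# image with a low-order 2D polynomial, so the block can be stored as a handful
# of coefficients. Block size and polynomial order are illustrative choices only.
import numpy as np

def fit_block(block, order=2):
    """Least-squares fit of z = f(x, y) using terms x^i * y^j with i + j <= order."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    terms = [(x ** i) * (y ** j)
             for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1).astype(np.float64)
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(np.float64), rcond=None)
    return coeffs, A

# A synthetic 16x16 "depth" block standing in for real sensor data.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:16, 0:16]
block = (1000 + 5 * xx + 3 * yy + rng.normal(0, 1, (16, 16))).astype(np.uint16)

coeffs, A = fit_block(block, order=2)
reconstruction = A @ coeffs                       # decode: evaluate the polynomial
max_error = np.max(np.abs(reconstruction - block.ravel()))
print(f"{coeffs.size} coefficients, max reconstruction error: {max_error:.2f}")
```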
Posted 1 year ago
Baekcheon Seong
Institution: Yonsei University
Abstract
Several image-based biomedical diagnoses require high-resolution imaging capabilities at large spatial scales. However, conventional microscopes exhibit an inherent trade-off between depth-of-field (DoF) and spatial resolution, and thus require objects to be refocused at each lateral location, which is time-consuming. Here, we present a computational imaging platform, termed the E2E-BPF microscope, which enables large-area, high-resolution imaging of large-scale objects without serial refocusing. This method involves a physics-incorporated, deep-learned design of a binary phase filter (BPF) and a jointly optimized deconvolution neural network, which together produce high-resolution, high-contrast images over extended depth ranges. We demonstrate the method through numerical simulations and experiments with fluorescently labeled beads, cells, and tissue sections, and present high-resolution imaging capability over a 15.5-fold larger DoF than the conventional microscope. Our method provides a highly effective and scalable strategy for DoF-extended optical imaging systems and is expected to find numerous applications in rapid image-based diagnosis, optical vision, and metrology.
Posted 1 year ago