We developed VisShare, a high-performance, cross-platform visualization library, to address these two challenges. VisShare incorporates two key components: a high-performance engine (HPE) and a full-featured API. The VisShare framework (Fig. 1a) depicts the workflow of creating and sharing visualizations between users and communities. By integrating VisShare into web and native apps, interactive visualizations of large and heterogeneous biomedical data can be easily disseminated across publishing platforms. This utility benefits diverse stakeholders in the research and publishing communities and supports knowledge sharing.
In this paper, we present an artificial-intelligence-based composition algorithm that generates harmonically coherent and varied arpeggios from given chords in real time. The algorithm combines a recurrent neural network (RNN) with gated recurrent units (GRUs) and a tetrahedral context-sensitive L-system (TCSL). The RNN learns the inherent harmony of arpeggio datasets and provides probabilistic predictions that suggest the next note and its duration. The TCSL model, built on a tetrahedron with seven interval operators and a set of production rules, increases the variety of the generated output. At each iteration, the TCSL generates one note by requesting a probabilistic prediction from the RNN, computing the candidate notes, and selecting the target note. Experiments in which we trained two RNNs for the TCSL generation model indicate that the proposed algorithm overcomes two obstacles of current computer-aided arpeggio composition: achieving global harmonic coherence and maintaining variety in the output. Our research extends deep learning models (DLMs) into the design space of interactive composition systems.
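The per-note generation loop can be pictured with a short sketch. The following Python is a minimal illustration under stated assumptions, not the paper's implementation: the GRU architecture, the pitch vocabulary, the seven interval values, and the greedy candidate selection are all placeholders introduced for clarity.

```python
# Minimal sketch of the RNN + TCSL per-note loop described in the abstract.
# The model here is untrained; the interval operators and selection rule
# are illustrative assumptions, not the paper's actual production rules.
import torch
import torch.nn as nn

NUM_PITCHES = 48                                 # assumed 4-octave vocabulary
INTERVAL_OPERATORS = [-12, -7, -4, -3, 3, 4, 7]  # hypothetical 7 operators

class NotePredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(NUM_PITCHES, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_PITCHES)

    def forward(self, prev_note, state=None):
        x = self.embed(prev_note).unsqueeze(1)   # (1, 1, hidden)
        out, state = self.gru(x, state)
        logits = self.head(out[:, -1])           # next-note distribution
        return logits, state

def generate(model, start_note=24, steps=16):
    """One note per iteration: the RNN predicts a distribution, the
    L-system's interval operators restrict the candidates, and the most
    probable candidate becomes the target note."""
    note, state, melody = torch.tensor([start_note]), None, [start_note]
    for _ in range(steps):
        logits, state = model(note, state)
        probs = torch.softmax(logits, dim=-1).squeeze(0)
        candidates = [melody[-1] + op for op in INTERVAL_OPERATORS
                      if 0 <= melody[-1] + op < NUM_PITCHES]
        best = max(candidates, key=lambda n: probs[n].item())
        melody.append(best)
        note = torch.tensor([best])
    return melody

print(generate(NotePredictor()))
```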
Social robots are increasingly reaching the public, owing to advances in artificial intelligence and human-robot technologies. The human-robot interaction (HRI) research community has shown that social robots must survive in social circumstances by adapting and learning through interactions with humans and their environments. However, the complexity and ambiguity of designing adaptive interactions for social robots confuse designers in the robotics industry. It is therefore desirable to make accessible a design method that empowers designers to implement tangible robot prototypes with creative and technical flexibility. This research addresses these challenges by producing a practical tool that uses design patterns for designing interactions between humans and social robots. The design patterns, together with an iterative framework, serve as a practical tool to reduce the complexity and ambiguity of these design issues. We developed two social robot prototypes (Neko and Chirp) to demonstrate empirical improvements in constructing well-organized interactions and elaborating adaptive reactions.
In recent years, the scale of university campuses in China has expanded, and both campus buildings and road networks have become increasingly complex. Given this trend and the uniqueness of each campus, the question of how to effectively address campus transportation deserves careful thought. This paper takes the Jiading Campus of Tongji University as a practical case. Design research on the campus showed that a slow traffic service system was an appropriate transportation mode for the Jiading campus. The system is designed around a digital-touchpoint architecture based on service-oriented landmarks (SOLs). This paper explores how to design these digital touchpoints and organize their associations to provide people with better campus slow traffic services.
Shi J., Ma K. (2018) Digital Touchpoints in Campus Slow Traffic Service System. In: Stanton N. (ed.) Advances in Human Aspects of Transportation. AHFE 2017. Advances in Intelligent Systems and Computing, vol. 597. Springer, Cham.
Comparative genetic interaction mapping reveals functional crosstalk between distinct bioprocesses
Dan Chen*, Wei Xu*, Yu Wang, Yongshen Ye, Yue Wang, Miao Yu, Jinghu Gao, Jielin Wei, Yiming Dong, Honghua Zhang, Ke Ma, Wenqing Cheng, Shu Wang, Barth D. Grant, Chad L. Myers, Anbing Shi, and Tian Xia.
Submitted to a journal; in peer review.
Multifactorial deep learning reveals pan-cancer genomic tumor clusters with distinct immunogenomic landscape and response to immunotherapy
Feng Xie, Jianjun Zhang, Jiayin Wang, Alexandre Reuben, Wei Xu, Xin Yi, Frederick S. Varn, Yongsheng Ye, Junwen Cheng, Miao Yu, Yue Wang, Mingchao Xie, Peng Du, Ke Ma, Penghui Zhou, Sheng-li Yang, Yaobing Chen, Guoping Wang, Xuefeng Xia, Zhongxing Liao, John V. Heymach, Ignacio Wistuba, P. Andrew Futreal, Kai Ye, Chao Cheng, Tian Xia.
The motivation of our research is to explore gestural interaction with water for musical expression. As a first stage toward this goal, we built Ripples, an aquarium-type digital musical instrument in which players perform predefined gestures to improvise digital music. This paper presents the design of the tangible user interface (TUI) and the implementation of Ripples, as well as a user study that observed and recorded players' in-water gestures with a camera. The results can inform future work on defining a gesture vocabulary and provide the TEI community with guidelines for the rapid creation of tangible musical interfaces.
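As a rough illustration of camera-based gesture capture, the sketch below maps the centroid of a bright region in a video frame to a pitch. The thresholding approach and the note range are assumptions introduced for the example; the abstract does not specify Ripples' actual recognition or sound-mapping method.

```python
# Hypothetical camera-to-note mapping; not Ripples' actual pipeline.
import cv2
import numpy as np

def frame_to_note(frame, low=48, high=72):
    """Map the bright region's horizontal position to a MIDI pitch."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None                       # no gesture detected
    cx = moments["m10"] / moments["m00"]  # centroid x-coordinate
    ratio = cx / frame.shape[1]           # normalize to [0, 1]
    return low + int(ratio * (high - low))

# Usage with a synthetic frame (a bright blob on the right side).
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:140, 200:260] = 255
print(frame_to_note(frame))               # 65
```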
In biomedical research, visualization is extensively used to depict all kinds of data and analyses. Our Signichat (SC) platform aims to build an online community that facilitates collaboration, knowledge sharing and integration, and training and education based on image-formatted biomedical information. This impact will be multiplied as laboratories worldwide use SC to bring people with diverse backgrounds and interests together to work toward a common goal: tackling complex diseases such as cancer with biomedical 'big data'.
We propose the development of SC, an image-based scientific research social network platform, to help basic scientists, clinicians, and the public grapple with large amounts of cancer data in a community-based, collaboration-driven fashion. The core design concept of SC is to use a collection of images, called the SC Conversation Thread (SC-CT), to depict biomedical information, and to use social networking features so that biomedical researchers can connect with, help, and collaborate with one another through the SC-CT, based on the information encapsulated in the images.
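To make the SC-CT concept concrete, here is a minimal data-model sketch. All field names and types are hypothetical; the abstract only specifies that a thread is a collection of images combined with social networking features.

```python
# Hypothetical data model for an SC Conversation Thread (SC-CT).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SCImage:
    url: str          # image depicting biomedical information
    caption: str
    author: str
    posted_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class ConversationThread:
    topic: str
    images: list[SCImage] = field(default_factory=list)
    followers: set[str] = field(default_factory=set)  # social feature

    def post(self, image: SCImage) -> None:
        """Append an image so collaborators can discuss it in the thread."""
        self.images.append(image)
```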
LegoBeats: A Collaborative Drum Machine for Composing Rhythmic Patterns and Dynamic Tempo, Timbres and Effects
The drum machine was originally developed to generate percussive patterns for musical performance. In recent years, digital drum machines have increasingly been applied to collaborative music composition, in which rhythmic percussion sounds are manipulated simultaneously by multiple users. A collaborative drum machine requires not only sequence creation; features such as timbres, tempo, filters, and effects must also be implemented. Moreover, how to bring these features into tangible interfaces, and how to design interactions that seamlessly match the affordances of physical objects, deserve additional consideration. The goal of our research is to explore these questions.
This paper presents LegoBeats, a collaborative drum machine in which multiple users simultaneously manipulate Lego bricks to generate, modify, and perform dynamic rhythmic patterns, timbres, tempos, and effects through a tangible interface. LegoBeats demonstrates our research on the connection between the affordances of objects, interaction design, and the extended features of a drum machine.
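As an illustration of the sequencing state such a system might maintain, the sketch below models bricks as step toggles across per-timbre tracks. The 16-step grid, the brick-to-step mapping, and the parameter set are assumptions; LegoBeats' actual internal representation is not described in the abstract.

```python
# Hypothetical step-sequencer state for a brick-based drum machine.
STEPS = 16  # assumed: one bar of sixteenth notes

class Track:
    def __init__(self, timbre: str):
        self.timbre = timbre               # e.g. "kick", "snare", "hat"
        self.pattern = [False] * STEPS     # each brick toggles one step

    def toggle(self, step: int) -> None:
        """Placing or removing a brick flips the step on or off."""
        self.pattern[step % STEPS] = not self.pattern[step % STEPS]

class DrumMachine:
    def __init__(self, tempo_bpm: int = 120):
        self.tempo_bpm = tempo_bpm         # adjustable during performance
        self.tracks: dict[str, Track] = {}

    def add_track(self, timbre: str) -> Track:
        self.tracks[timbre] = Track(timbre)
        return self.tracks[timbre]

    def events_for_step(self, step: int) -> list[str]:
        """Timbres to trigger at this step of the sequence."""
        return [t.timbre for t in self.tracks.values()
                if t.pattern[step % STEPS]]

# Usage: one user lays out a four-on-the-floor kick pattern.
dm = DrumMachine()
kick = dm.add_track("kick")
for step in (0, 4, 8, 12):
    kick.toggle(step)
print(dm.events_for_step(0))  # ['kick']
```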
Author: Ke Ma
Submitted to ICMC/EMW 2017, 43rd International Computer Music Conference + 6th Electronic Music Week, 16-20 October 2017, Shanghai, China.
In general, the public is accustomed to enjoying improvised music in many ways, rather than creating compositions themselves. One reason is that current real-time composition systems demand professional composing skills; another is that these systems are difficult to operate. To tackle these two obstacles, we designed and implemented a RealSense-based gestural real-time composition system that brings the pleasure of music composition to users without special training.
We applied the recently released RealSense technology to gesture recognition, so that harmonic and diverse melodies can be generated with commonly used natural gestures. Our system consists of three modules: (1) Gesture interaction. Users' gestures are captured and recognized in real time by a 3D RealSense camera, and the recognized results are mapped to scale-chord inputs. (2) Real-time composition. Along with the chord progressions, arpeggio melodies, walking bass lines, and drum tracks are generated automatically by computer-aided algorithms. First, harmonic and varied arpeggios are generated by a novel deep learning algorithm combining gated recurrent units (GRUs) and a tetrahedral context-sensitive L-system. Second, a contour-based bass generation algorithm produces globally coherent walking bass lines. (3) Music output. The system exports the generated music as MIDI and audio.
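To make module (1) concrete, the following sketch maps recognized gesture labels to scale-chord inputs. The gesture vocabulary and the chord table are illustrative assumptions, not the system's actual mapping.

```python
# Hypothetical gesture-to-chord table for module (1); the composition
# module would then elaborate the chord into arpeggios, bass, and drums.
SCALE_CHORDS = {                      # C-major scale-degree triads (MIDI)
    "fist":        [60, 64, 67],      # I  (C major)
    "open_palm":   [65, 69, 72],      # IV (F major)
    "two_fingers": [67, 71, 74],      # V  (G major)
    "thumb_up":    [57, 60, 64],      # vi (A minor)
}

def gesture_to_chord(gesture: str) -> list[int] | None:
    """Map a recognized gesture label to a scale-chord input."""
    return SCALE_CHORDS.get(gesture)

print(gesture_to_chord("open_palm"))  # [65, 69, 72]
```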
Our system lowers the barriers that keep novices from creating harmonically coherent compositions. Beyond assisting novices, it shows promise for improvisation and can help music learners and composers. We propose an interactive approach that uses natural gestures to control chords for music generation, which can be extended to a wide range of commercial scenarios.