DNG in HCI International 2022 / We will present our research at HCI International 2022, held online

2022. 06. 20

We will present our work at HCI International 2022, the 24th International Conference on Human-Computer Interaction!
The Digital Nature Group will give 11 oral presentations and 3 poster presentations at HCI International 2022, which will be held online.

Official Web : http://2022.hci.international/

Conference dates: 2022/06/26 – 2022/07/01 (VIRTUAL)


Presented Projects

 

◆◆Parallel sessions with paper presentations◆◆

1. Retinal Viewfinder: Preliminary Study of Retinal Projection-Based Electric Viewfinder for Camera Devices

Project page: Retinal Viewfinder

Authors: Ippei Suzuki, Yuta Itoh, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/27, MON
22:00 – 24:00 (JST – Tokyo)  / 15:00 – 17:00 (CEST – Stockholm)
Human-Computer Interaction
Novel Input and Output Techniques – I

For more information, please click here.

2. Visualizing the Electroencephalography Signal Discrepancy When Maintaining Social Distancing: EEG-Based Interactive Moiré Patterns

Project page: Visualizing the Electroencephalography Signal Discrepancy When Maintaining Social Distancing

Authors: Jingjing Li, Ye Yang, Zhexin Zhang, Yinan Zhao, Vargas Meza Xanat, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/29, WED
19:00 – 21:00 (JST – Tokyo)  / 12:00 – 14:00 (CEST – Stockholm)
Human-Computer Interaction
Contents Technology

For more information, please click here.

3. Electroencephalography and Self-Assessment Evaluation of Engagement with Online Exhibitions: Case Study of Google Arts & Culture

Project page: EEG and Self-Assessment Evaluation of Engagement with Online Exhibitions

Authors: Jingjing Li, Chengbo Sun, Vargas Meza Xanat, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/29, WED
19:00 – 21:00 (JST – Tokyo)  / 12:00 – 14:00 (CEST – Stockholm)
Culture and Computing
Experience Design in Arts

For more information, please click here.

4. Customizable Text-to-Image Modeling by Contrastive Learning on Adjustable Word-visual Pairs

Co-creation with AI is trending, and AI generation of images from textual descriptions has shown advanced and attractive capabilities. However, commonly trained machine-learning models and AI-based systems may fail to generate satisfying results for personal use or for novices in painting or AI co-creation, perhaps because they understand personal textual expressions poorly or offer little customization of the trained text-to-image models. We therefore assist in creating flexible and diverse visual content from textual descriptions by developing neural-network models with machine learning. In modeling, we generate synthesized images from word-visual co-occurrence with a Transformer model and synthesize images by decoding visual tokens. To improve visual and textual expressions and their relevance with more diversity, we apply contrastive learning to texts, images, or pairs of texts and images. In experiments on a dataset of birds, we showed that rendering quality requires neural networks of a certain scale and a training process that ends with fine-tuning at relatively low learning rates. We further showed that contrastive learning can improve visual and textual expressions and their relevance.
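As a rough illustration of the contrastive objective mentioned above (a sketch, not the authors' implementation; the function name and temperature value are assumptions), the following computes a symmetric InfoNCE-style loss over a batch of paired text and image embeddings:

```python
import numpy as np

def info_nce_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss for paired embeddings.

    Row i of `text_emb` and row i of `image_emb` form a matching pair
    (pulled together); every other in-batch combination is a negative.
    """
    # L2-normalize so the dot product becomes cosine similarity
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature        # (batch, batch) similarity matrix
    idx = np.arange(len(t))               # diagonal entries are the positives

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)              # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # average the text->image and image->text directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Minimizing this loss drives matching text-image pairs toward high similarity relative to the rest of the batch, which is the general mechanism behind contrastive text-image training.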

Publication page: https://digitalnature.slis.tsukuba.ac.jp/2022/06/text-to-image-modeling_hcii2022/

Authors: Jun-Li Lu, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/29, WED
19:00 – 21:00 (JST – Tokyo)  / 12:00 – 14:00 (CEST – Stockholm)
Artificial Intelligence in HCI
Human-AI Collaboration

For more information, please click here.

5. Calmbots: Exploring Madagascar Cockroaches as Living Ubiquitous Interfaces

We introduce Calmbots, insect-based interfaces comprising multiple functions (transportation, display, drawing, or haptics) for use in human living spaces, taking advantage of insects' capabilities. We used Madagascar hissing cockroaches as robots because of advantages such as mobility, strength, hiding, and self-sustaining abilities. Madagascar hissing cockroaches, for instance, can be controlled to move across uneven, cable-strewn floors and push lightweight objects such as a tablespoon. We controlled the cockroaches' movement using electrical stimulation, and developed a system for tracking and communicating with their backpacks using augmented reality markers and a radio base station, a procedure for controlling multiple cockroaches to reach their goals and transport objects, and customized optional parts. Our method demonstrated effective control over a group of three or five cockroaches, with at least 60% success accuracy in dedicated experimental environments involving over forty trials per test. Calmbots could move on carpeted or cable-strewn floors and did not become desensitized to stimulation given a certain break interval.
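The control pipeline above is hardware-specific, but the per-insect steering step can be sketched abstractly. The following is a hypothetical example, assuming the AR-marker tracker reports each insect's position and heading; the function name, the angular threshold, and the mapping of turn commands to stimulation electrodes are invented for illustration:

```python
import math

def steering_command(pos, heading, goal, threshold=math.radians(20)):
    """Decide how to steer one insect toward a goal.

    pos, goal: (x, y) coordinates from the tracker; heading: radians.
    Returns 'left', 'right', or 'forward' (no stimulation needed).
    """
    bearing = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    # angular error between goal bearing and heading, wrapped into [-pi, pi)
    error = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    if error > threshold:
        return "left"     # goal lies to the left of the current heading
    if error < -threshold:
        return "right"
    return "forward"
```

In a real system the returned direction would be translated into a stimulation pulse on the appropriate side; the threshold trades off turning precision against stimulation frequency (and thus desensitization).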

Project page: Calmbots

Authors: Yuga Tsukuda, Daichi Tagami, Masaaki Sadasue, Shieru Suzuki, Jun-Li Lu, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/30, THU
17:00 – 19:00 (JST – Tokyo)  / 10:00 – 12:00 (CEST – Stockholm)
Universal Access in Human-Computer Interaction
Cross-cultural Designs and Accessibility Technologies During COVID-19 – I

For more information, please click here.

6. Transformation of Plants into Polka Dot Arts: Kusama Yayoi as an Inspiration for Deep Learning

Project page: Transformation of Plants into Polka Dot Arts

Authors: Jingjing Li, Xiaoyang Zheng, Jun-Li Lu, Vargas Meza Xanat, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/30, THU
17:00 – 19:00 (JST – Tokyo)  / 10:00 – 12:00 (CEST – Stockholm)
Universal Access in Human-Computer Interaction
Cross-cultural Designs and Accessibility Technologies During COVID-19 – I

For more information, please click here.

7. EMS-Supported Throwing: Preliminary Investigation on EMS-Supported Training of Movement Form

We propose a learning support system with extremely low latency and low cognitive load for correcting the user's motion. In previous studies, visual and haptic feedback have mainly been used to support motion learning, but there is a delay between the presentation of the stimulus and the modification of the action. This delay is due to reaction time and cognitive load and is difficult to shorten. This study proposes a system that solves this problem by combining electrical muscle stimulation (EMS) with prediction of the user's motion. To improve control of the underhand throw, we used the system to tell the subject the release point during the underhand throwing motion and verified the learning effect. The experiment revealed that EMS tended to be effective in teaching the ball's release point, although it did not improve control of the underhand throwing motion. Although the effectiveness of EMS for motion learning has not yet been fully evaluated, this study shows the potential of applying EMS to support motion learning.
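The prediction step can be illustrated with a minimal sketch (not the authors' implementation; the function name, the linear-extrapolation model, and the units are assumptions): given sampled wrist angles over time, extrapolate when the release angle will be crossed and schedule the EMS pulse earlier by the known stimulation latency so the stimulus lands at the release point itself:

```python
def predict_trigger_time(samples, release_angle, latency_s):
    """Schedule an EMS pulse so it lands at the predicted release point.

    samples: list of (time_s, wrist_angle_deg) pairs, most recent last.
    Returns the time at which to send the EMS command, or None if the
    arm is not currently swinging toward the release angle.
    """
    (t0, a0), (t1, a1) = samples[-2], samples[-1]
    velocity = (a1 - a0) / (t1 - t0)              # deg/s, finite difference
    if velocity <= 0 or a1 >= release_angle:      # not approaching release
        return None
    # linearly extrapolate the moment the release angle will be crossed
    t_release = t1 + (release_angle - a1) / velocity
    return t_release - latency_s                  # fire early to absorb latency
```

Triggering ahead of the predicted event is what lets the stimulus bypass the reaction-time delay that visual or haptic cues cannot avoid.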

Publication page: https://digitalnature.slis.tsukuba.ac.jp/2022/06/ems-supported-throwing-hcii/

Authors: Ryogo Niwa, Kazuya Izumi, Shieru Suzuki, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/30, THU
17:00 – 19:00 (JST – Tokyo)  / 10:00 – 12:00 (CEST – Stockholm)
Universal Access in Human-Computer Interaction
Cross-cultural Designs and Accessibility Technologies During COVID-19 – I

For more information, please click here.

8. Dance through Visual Media: The Influence of COVID-19 on Dance Artists

Among the arts, dance is regarded as a "dynamic spatiotemporal art using the body as a medium," and it is considered best appreciated live [10]. By appreciating a dance work live, the theme, movement, and impressions of the work are communicated [12]. However, because of the spread of COVID-19, the first state of emergency was declared in Japan in March 2020. Under these circumstances, theaters were closed because of the risk of infection, and all dance performances were cancelled. Live dance appreciation became impossible, and dance performances using visual media soared. Therefore, to clarify how Japanese dance artists have shifted to video distribution in response to the spread of COVID-19 and how this shift has been perceived, we conducted semi-structured interviews with dance artists who have engaged in video distribution of dance owing to the spread of COVID-19. The interviews revealed the merits and demerits of video-delivered dance, the problems that emerged, points particular to video-delivered dance, and new physical sensations obtained through video-delivered dance. Based on these results, we suggest room for improvement and discuss how to provide better computer support.

Publication page: https://digitalnature.slis.tsukuba.ac.jp/2022/06/dance-through-visual-media-hcii/

Authors: Ryosuke Suzuki, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/30, THU
19:30 – 21:30 (JST – Tokyo)  / 12:30 – 14:30 (CEST – Stockholm)
Universal Access in Human-Computer Interaction
Cross-cultural Designs and Accessibility Technologies During COVID-19 – II

For more information, please click here.

9. Designing AI-support VR by Self-supervised and Initiative Selective Supports

We aim to provide flexible support methods and intelligent support content for users in VR contexts, in contrast to existing approaches that rely on a single sensing function or a fixed combination of them (e.g., support based on gesture, head, or body movement). To provide flexible support functions conditioned on VR contexts or user feedback, we propose semi-automatic selection of interactive supports. In modeling this semi-automatic selection from user feedback and VR contexts, we propose evaluating performance by combining intelligent AI evaluation, based on data of users' performance in VR, with the user's own feedback. Furthermore, to provide customizable or personalized estimation for VR support, we propose applying self-supervised machine learning, which lets us train or retrain estimation models at low data cost, including reduced data-labeling effort and reuse of existing models. We still need to evaluate the timing of selecting or modifying support methods; the balance between automatic and user-initiated support, given user preferences and experience, the smoothness of the VR content, and user awareness and understanding; and the scale, number, size, and limitations of the data and training needed for stable, accurate, and useful estimations of VR support.

Publication Page: https://digitalnature.slis.tsukuba.ac.jp/2022/06/designing-ai-support-vr-hcii/

Authors: Ritwika Mukherjee, Jun-Li Lu, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/30, THU
17:00 – 19:00 (JST – Tokyo)  / 10:00 – 12:00 (CEST – Stockholm)
Universal Access in Human-Computer Interaction
Cross-cultural Designs and Accessibility Technologies During COVID-19 – II

For more information, please click here.

10. Indoor Auto-navigate System for Electric Wheelchairs in a Nursing Home

We are currently living in the age of COVID-19, in which we wish to reduce physical contact as much as possible. This is even more important for nursing-home residents who need wheelchairs: people who live in nursing homes are usually older on average and more likely to have underlying diseases, so they need extra care against COVID-19. The most common and effective countermeasure is to avoid close contact. However, for most nursing-home residents, many mandatory daily activities require spending considerable time moving in a wheelchair with an assistant pushing it, which makes the assistant and the user close contacts of each other. We plan to design an auto-navigation computer system for electric wheelchairs so that users can travel to various places in a nursing home without an assistant at their side, reducing both the risk of infection and the human resources needed.
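The announcement does not specify a planning algorithm; as an illustrative sketch (names and grid representation are assumptions), breadth-first search over an occupancy-grid floor plan is one minimal way to compute a shortest obstacle-free route for such a wheelchair:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Shortest 4-connected route on an occupancy grid via BFS.

    grid: list of equal-length strings, '#' = obstacle, '.' = free.
    Returns a list of (row, col) cells from start to goal, or None
    if the destination is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}                   # visited set + backpointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk backpointers to start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

A deployed system would layer localization, dynamic-obstacle avoidance, and smooth motion control on top of such a route planner.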

Publication page: https://digitalnature.slis.tsukuba.ac.jp/2022/06/indoor-auto-navigate-system-hcii/

Authors: Zhexin Zhang, Jun-Li Lu, Yoichi Ochiai

Presentation details: Parallel sessions with paper presentations

2022/06/30, THU
17:00 – 19:00 (JST – Tokyo)  / 10:00 – 12:00 (CEST – Stockholm)
Universal Access in Human-Computer Interaction
Cross-cultural Designs and Accessibility Technologies During COVID-19 – II

For more information, please click here.

 


◆◆POSTER PRESENTATION:2022/06/26~07/01  (All Day)◆◆

 

For details on the poster presentations, please see the following page:

https://2022.hci.international/poster-presentations.html

 

1. Simulation Object Edge Haptic Feedback in Virtual Reality based on Dielectric Elastomer

With the reduced prices of VR devices, consumers are increasingly purchasing them, and with this growing popularity, device makers are seeking new methods of VR interaction. Current haptic feedback gloves such as CyberGrasp, the HaptX Glove, and Dexmo focus on positioning the hand in the virtual environment and on force feedback from the hand's contact with virtual objects. Users can use haptic gloves to determine the shape and size of virtual objects. However, when touching tiny objects in the virtual environment, haptic gloves cannot provide a good user experience. In this paper, I propose a new method to enhance haptic resolution from centimeters to millimeters, which can improve the user experience with haptic devices.
The goal of this study is to create a device worn on the fingertip that can reflect the subtle haptic sensations received by the user's fingertip in a virtual space built on UE4. This device enhances the user's haptic experience in a virtual space.

Publication page: https://digitalnature.slis.tsukuba.ac.jp/2022/06/simulation-object-edge-haptic-feedback-hcii/

Authors: Jiyao Chen, Jun-Li Lu, Yoichi Ochiai

2. How See the Colorful Scenery?: The Color-centered Descriptive Text Generation for the Visually Impaired in Japan

Project page: How See the Colorful Scenery?

Authors: Chieko Nishimura, Naruya Kondo, Takahito Murakami, Maya Torii, Ryogo Niwa, Yoichi Ochiai

3. Development and Evaluation of Systems to Enjoy a Wedding Reception for People with Low Visions

Project page: Low-Vision Wedding

Authors: Masaaki Sadasue, Ippei Suzuki, Kosaku Namikawa, Kengo Tanaka, Chieko Nishimura, Masaki Okamura, Yoichi Ochiai


About the Laboratory

Name: Digital Nature Group
Director: Associate Professor Yoichi Ochiai
Address: 1-2 Kasuga, Tsukuba, Ibaraki, Japan
Research: Spatial research and development using wave engineering, digital fabrication, and artificial intelligence
URL: https://digitalnature.slis.tsukuba.ac.jp/

Biography of the Director

Yoichi Ochiai

Born in 1987, he received his Ph.D. in Interdisciplinary Information Studies from the Graduate School of Interdisciplinary Information Studies, the University of Tokyo, in 2015 (the school's first early completion). After serving as a JSPS Research Fellow (DC1) and a Research Intern at Microsoft Research in the U.S., he became an Assistant Professor in the Faculty of Library, Information and Media Science at the University of Tsukuba in 2015, where he leads the Digital Nature Group. In 2015 he founded Pixie Dust Technologies, Inc. and serves as its CEO. He was Advisor to the President of the University of Tsukuba from 2017 to 2019, and has served as a Visiting Professor at Osaka University of Arts since 2017, a Specially Appointed Professor at Digital Hollywood University and a Visiting Professor at Kanazawa College of Art since 2020, and a Visiting Professor at Kyoto City University of Arts since April 2021. Since December 2017 he has also been Associate Professor and head of the University of Tsukuba's Digital Nature Promotion Strategic Research Platform, supported by Pixie Dust Technologies, Inc. In June 2020 he was appointed Director of the Research and Development Center for Digital Nature. His specialties include CG, HCI, VR, visual, auditory, and haptic presentation methods, digital fabrication, autonomous driving, and body control.

Contact

Name: Digital Nature Group, University of Tsukuba
Email: contact<-at->digitalnature.slis.tsukuba.ac.jp (replace <-at-> with @)
