Jun-Li Lu / 盧俊利

ALUMNI

2020-2022

Education

(This page reflects information as of his departure and will not be updated.)

2013-2020: Ph.D. in Informatics, Graduate School of Informatics, Kyoto University
Completed the doctoral program, Department of Social Informatics, Graduate School of Informatics, Kyoto University (supervisor: Prof. Katsumi Tanaka)
2008-2010: M.S. in Electrical Engineering, National Taiwan University, Taiwan
2005-2008: B.S., National Taipei University of Technology, Taiwan

PhD thesis:
“A Study on Resolution and Retrieval of Implicit Entity References in Microblogs”

JOURNALS:

1. Jun-Li Lu, Makoto P. Kato, Takehiro Yamamoto, Katsumi Tanaka
Searching for Microblogs Referring to Events by Deep Dynamic Query Strategies,
Journal of Information Processing, Vol.28, pp.320-332, May 2020

2. Jun-Li Lu, Makoto P. Kato, Takehiro Yamamoto, Katsumi Tanaka
Event Identification for Explicit and Implicit References on Microblogs,
Journal of Information Processing, Vol.25, pp.505-513, July 2017

3. Jun-Li Lu, Makoto P. Kato, Takehiro Yamamoto, Katsumi Tanaka
Entity Identification on Microblogs by CRF Model with Adaptive Dependency,
IEICE Transactions on Information and Systems, Vol.E99-D, No.9, pp.2295-2305, September 2016

PATENT:

4. Method for dispatching electric automobile with charging plan and system thereof,
Yu-Ching Hsu, Chai-Hien Gan, Shun-Neng Yang, Jun-Li Lu, Mi-Yen Yeh, Ming-Syan Chen
Taiwan (R.O.C.) patent number: I489401, June 2015

INTERNATIONAL CONFERENCE PAPERS (PEER-REVIEWED):

5. Hiroyuki Osone, Jun-Li Lu, Yoichi Ochiai
BunCho: AI Supported Story Co-Creation via Unsupervised Multitask Learning to Increase Writers' Creativity in Japanese,
the 2021 ACM CHI Conference on Human Factors in Computing Systems (CHI 2021), held online (originally planned for Yokohama, Japan), https://doi.org/10.1145/3411763.3450391, May 2021

6. Jun-Li Lu, Hiroyuki Osone, Akihisa Shitara, Ryo Iijima, Bektur Ryskeldiev, Sayan Sarcar, Yoichi Ochiai
Personalized Navigation that Links Speaker's Ambiguous Descriptions to Indoor Objects for Low Vision People,
accepted at the 23rd International Conference on Human-Computer Interaction, July 2021

7. Zhexin Zhang, Jun-Li Lu, Yoichi Ochiai
A Customized VR Rendering with Neural-Network Generated Frames for Reducing VR Dizziness,
accepted at the 23rd International Conference on Human-Computer Interaction, July 2021

8. Jun-Li Lu, Makoto P. Kato, Takehiro Yamamoto, Katsumi Tanaka
Entity Identification on Microblogs by CRF Model with Adaptive Dependency,
IEEE/WIC/ACM International Conference on Web Intelligence, pp.333-340, December 2015 (Best Student Paper Award)

9. Jun-Li Lu, Ling-Yin Wei, Mi-Yen Yeh
Influence Maximization in a Social Network in the Presence of Multiple Influences and Acceptances,
International Conference on Data Science and Advanced Analytics, pp.230-236, October 2014

10. Jun-Li Lu, Mi-Yen Yeh, Yu-Ching Hsu, Shun-Neng Yang, Chai-Hien Gan, Ming-Syan Chen
Operating Electric Taxi Fleets: a New Dispatching Strategy with Charging Plans,
IEEE International Electric Vehicle Conference, pp.1-8, March 2012

INTERNATIONAL WORKSHOP:

11. Jun-Li Lu, Hiroyuki Osone, Akihisa Shitara, Ryo Iijima, Bektur Ryskeldiev, Sayan Sarcar, Yoichi Ochiai
Personalized Navigation that Links Speaker's Ambiguous Descriptions to Indoor Objects for Low Vision People,
the ACM CHI 2021 Workshop on Design and Creation of Inclusive User Interactions Through Immersive Media, May 2021

12. Yuga Tsukuda, Naoto Nishida, Jun-Li Lu, Yoichi Ochiai
Insect-Computer Hybrid Speaker: Speaker using Chirp of the Cicada Controlled by Electrical Muscle Stimulation,
the ACM CHI 2021 Workshop on Design and Creation of Inclusive User Interactions Through Immersive Media, May 2021

INTERNATIONAL TALK:

13. Lupita Guillen Mandujano, Erdas Kuruc, Jun-Li Lu, Paola Sanoni, Vargas Meza Xanat
Pluriversal Design Transitions for Higher Education Motivated by COVID-19,
Panel Discussion in the 7th International Conference of the Immersive Learning Research Network (iLRN 2021), May 2021

Profile

Jun-Li Lu received his Ph.D. in Informatics from Kyoto University in 2020. His research interests include Computer Vision, Natural Language Processing, Information Retrieval, and Machine Learning.

He has worked as a collaborative researcher with both academia and industry: from 2018 at Advanced Smart Mobility Co., Ltd., Japan; from 2017 at Institution for a Global Society Corporation, Japan; and from 2010 to 2013 at CITI, Academia Sinica, Taiwan.

He received the Best Student Paper Award at the IEEE/WIC/ACM International Conference on Web Intelligence, Singapore, 2015, for part of his Ph.D. thesis work, and holds one industry patent from Taiwan (R.O.C.), granted in 2015.

RECENT PUBLICATIONS (2021-2022):

Lupita Guillen Mandujano, Erdas Kuruc, Jun-Li Lu, Paola Sanoni, Xanat Vargas Meza. "Panel Session: Pluriversal Design Transitions for Higher Education Motivated by COVID-19," 2021 7th International Conference of the Immersive Learning Research Network (iLRN), 2021, pp. 1-3, doi: 10.23919/iLRN52045.2021.9459327.

Jiyao Chen, Jun-Li Lu, Yoichi Ochiai. (2022). Simulation Object Edge Haptic Feedback in Virtual Reality Based on Dielectric Elastomer. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2022 Posters. HCII 2022. Communications in Computer and Information Science, vol 1581. Springer, Cham. https://doi.org/10.1007/978-3-031-06388-6_1

Zhexin Zhang, Jun-Li Lu, Yoichi Ochiai. (2022). Indoor Auto-Navigate System for Electric Wheelchairs in a Nursing Home. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. Novel Design Approaches and Technologies. HCII 2022. Lecture Notes in Computer Science, vol 13308. Springer, Cham. https://doi.org/10.1007/978-3-031-05028-2_36

Jingjing Li, Xiaoyang Zheng, Jun-Li Lu, Xanat Vargas Meza, Yoichi Ochiai. (2022). Transformation of Plants into Polka Dot Arts: Kusama Yayoi as an Inspiration for Deep Learning. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. Novel Design Approaches and Technologies. HCII 2022. Lecture Notes in Computer Science, vol 13308. Springer, Cham. https://doi.org/10.1007/978-3-031-05028-2_18

Jun-Li Lu, Yoichi Ochiai. (2022). Customizable Text-to-Image Modeling by Contrastive Learning on Adjustable Word-Visual Pairs. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science, vol 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_30

Yuga Tsukuda, Daichi Tagami, Masaaki Sadasue, Shieru Suzuki, Jun-Li Lu, Yoichi Ochiai. (2022). Calmbots: Exploring Madagascar Cockroaches as Living Ubiquitous Interfaces. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. Novel Design Approaches and Technologies. HCII 2022. Lecture Notes in Computer Science, vol 13308. Springer, Cham. https://doi.org/10.1007/978-3-031-05028-2_35

Ritwika Mukherjee, Jun-Li Lu, Yoichi Ochiai. (2022). Designing AI-Support VR by Self-supervised and Initiative Selective Supports. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. User and Context Diversity. HCII 2022. Lecture Notes in Computer Science, vol 13309. Springer, Cham. https://doi.org/10.1007/978-3-031-05039-8_17

Yuga Tsukuda, Daichi Tagami, Masaaki Sadasue, Shieru Suzuki, Jun-Li Lu and Yoichi Ochiai. 2022. Calmbots: Exploring Possibilities of Multiple Insects with On-hand Devices and Flexible Controls as Creation Interfaces. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI EA ’22). Association for Computing Machinery, New York NY United States, Article No.9, Pages 1–13, https://doi.org/10.1145/3491101.3516387

Zhexin Zhang, Jun-Li Lu, Yoichi Ochiai. (2021). A Customized VR Rendering with Neural-Network Generated Frames for Reducing VR Dizziness. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2021 – Posters. HCII 2021. Communications in Computer and Information Science, vol 1420. Springer, Cham. https://doi.org/10.1007/978-3-030-78642-7_51

Jun-Li Lu, Hiroyuki Osone, Akihisa Shitara, Ryo Iijima, Bektur Ryskeldiev, Sayan Sarcar, Yoichi Ochiai. (2021). Personalized Navigation that Links Speaker's Ambiguous Descriptions to Indoor Objects for Low Vision People. In: Antona, M., Stephanidis, C. (eds) Universal Access in Human-Computer Interaction. Access to Media, Learning and Assistive Environments. HCII 2021. Lecture Notes in Computer Science, vol 12769. Springer, Cham. https://doi.org/10.1007/978-3-030-78095-1_30

Hiroyuki Osone, Jun-Li Lu, and Yoichi Ochiai. 2021. BunCho: AI Supported Story Co-Creation via Unsupervised Multitask Learning to Increase Writers’ Creativity in Japanese. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI ’21 Extended Abstracts), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3411763.3450391

AWARDS:

Yuga Tsukuda, Daichi Tagami, Masaaki Sadasue, Shieru Suzuki, Jun-Li Lu, Yoichi Ochiai. Calmbots. Digital Content EXPO, selected for Innovative Technologies 2020, IDCAJ Chairman's Award, 2020.11.14. https://online.dcexpo.jp/i-tec/

Yuga Tsukuda, Daichi Tagami, Masaaki Sadasue, Shieru Suzuki, Jun-Li Lu, Yoichi Ochiai. Calmbots. Digital Content EXPO, selected for Innovative Technologies 2020, Innovative Technologies 2020 Special Prize -Crazy-, 2020.10.20. https://www.dcexpo.jp/itech2020

MEDIA COVERAGE:

Award ceremony for "Digital Content EXPO 2020 Online Innovative Technologies 2020", https://cgworld.jp/feature/202012-dcexpo.html, CGWORLD.jp, 2020.12.23.

Cockroaches that draw pictures and carry objects: University of Tsukuba develops "Calmbots", a technology for remotely controlling a swarm, https://www.itmedia.co.jp/news/articles/2012/01/news098.html, ITmedia, 2020.12.01.

Scientists from Japan taught cockroaches to work, https://news.rambler.ru/tech/45286835-uchenye-iz-yaponii-nauchili-tarakanov-rabotat/, Rambler, 2020.11.22.

[Insect photos inside; skip if squeamish] Japanese team builds "semi-robot cockroaches" that are easy to keep, easy to breed, and can even carry objects, https://buzzorange.com/techorange/2020/11/19/japan-cyborg-cockroaches/, TechOrange, 2020.11.19.

Calmbots, the Cyborg Cockroaches That Can Perform Small Household Tasks, https://www.autoevolution.com/news/calmbots-the-cyborg-cockroaches-that-can-perform-small-household-tasks-151252.html, autoevolution, 2020.11.09.

Calmbots: the robot cockroaches that draw lines and move objects, https://www.lanacion.com.ar/tecnologia/calmbots-asi-son-cucarachas-robot-dibujan-lineas-nid2501988, LA NACION, 2020.11.08.

Cyborg cockroaches designed to complete tasks inside your HOME can carry objects across the room, transform into a display and draw objects on paper, https://www.dailymail.co.uk/sciencetech/article-8918671/Cyborg-cockroaches-designed-complete-tasks-inside-HOME-carry-objects-room.html, Daily Mail Online, 2020.11.05.

A Swarm of Cyborg Cockroaches That Lives in Your House, https://spectrum.ieee.org/automaton/robotics/robotics-hardware/swarm-of-cyborg-cockroaches, IEEE Spectrum, 2020.11.04.