Onward to the Synapse!

"Peeking into dreams" is nothing new in anime and sci-fi works,
but imagination is one thing; a gadget like the "ユメプロジェクター" has yet to actually appear.
Watching someone else's dreams may still be a way off, but reconstructing another person's conscious visual perception is no longer out of reach.
According to a report spotted on はちま起稿, neuroscientist Jack Gallant's group at UC Berkeley
used fMRI to record blood-flow changes in subjects' brains while they watched movie clips,
analyzed how the activity of each small brain region (voxel) related to the movie content to derive a model,
then combined that model with 18 million seconds of random YouTube clips, and actually succeeded in reconstructing part of the footage the subjects had watched!

(The following is quoted from "Scientists use brain imaging to reveal the movies in our mind")
…..Nishimoto and two other research team members served as subjects for the experiment,
because the procedure requires volunteers to remain still inside the MRI scanner
for hours at a time.
…..
…..They watched two separate sets of Hollywood movie trailers,
while fMRI was used to measure blood flow through the visual cortex,
the part of the brain that processes visual information. On the computer,
the brain was divided into small, three-dimensional cubes known as volumetric pixels,
or “voxels.”
…..
…..“We built a model for each voxel that describes how shape and motion information in the movie
is mapped into brain activity,” Nishimoto said.
…..
…..The brain activity recorded while subjects viewed the first set of clips was fed into
a computer program that learned, second by second,
to associate visual patterns in the movie with the corresponding brain activity
…..
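The "learned, second by second" step above amounts to fitting a regression per voxel. Below is a minimal sketch of what such a per-voxel encoding model could look like, using ridge regression from synthetic stand-in "motion-energy" features to a single voxel's response; all data, sizes, and names here are illustrative assumptions, not the team's actual features or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: T seconds of movie, F "motion-energy" features
# per second, and one voxel's response per second (all assumed).
T, F = 600, 50
X = rng.standard_normal((T, F))                 # movie features
w_true = rng.standard_normal(F)                 # unknown voxel tuning
y = X @ w_true + 0.1 * rng.standard_normal(T)   # noisy voxel response

# Per-voxel model via ridge regression:
#   w = argmin ||Xw - y||^2 + lam*||w||^2  =>  w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(F), X.T @ y)

# The fitted weights describe how each feature maps into this voxel's
# activity; predicting activity for a new clip is just X_new @ w.
r = np.corrcoef(X @ w, y)[0, 1]
print(round(float(r), 2))
```

In the study this kind of fit is repeated independently for every voxel in visual cortex, yielding one weight vector per voxel.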
…..Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm.
This was done by feeding 18 million seconds of random YouTube videos into the computer program
so that it could predict the brain activity that each film clip would most likely evoke in each subject
…..
…..Finally, the 100 clips that the computer program decided were most similar to the clip
that the subject had probably seen were merged to produce a blurry
yet continuous reconstruction of the original movie
…..
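The identification-and-merge step can be sketched as follows: use the fitted per-voxel weights to predict the activity each candidate clip would evoke, rank the candidates by correlation with the observed activity, and average the pixels of the top matches. A toy library of 500 clips and a top-3 average stand in for the paper's 18 million seconds and top 100; everything here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

V, F = 200, 50                          # voxels, features per clip
W = rng.standard_normal((V, F))         # fitted per-voxel weights (assumed)

# A small "prior" library of candidate clips, each summarized by a
# feature vector and by its pixels (a flat 8x8 frame per clip here).
n_clips = 500
library_feats = rng.standard_normal((n_clips, F))
library_pixels = rng.random((n_clips, 64))

# Observed brain activity while the subject watched some target clip.
target = 123
observed = W @ library_feats[target] + 0.2 * rng.standard_normal(V)

# Predict the activity each candidate would evoke, rank by correlation
# with the observed pattern, then average the top-k candidates' pixels.
predicted = library_feats @ W.T                      # (n_clips, V)

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([corr(p, observed) for p in predicted])
top_k = np.argsort(scores)[::-1][:3]
reconstruction = library_pixels[top_k].mean(axis=0)  # blurry average

print(int(top_k[0]))   # best match (should be the target clip)
```

With real data the candidate pool is enormous, which is exactly why the merged result is blurry: the top matches only approximate the true clip, so their average smears the details.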
…..Reconstructing movies using brain scans has been challenging because the blood flow signals
measured using fMRI change much more slowly than the neural signals that encode
dynamic information in movies, researchers said. For this reason,
most previous attempts to decode brain activity have focused on static images
…..
…..“We addressed this problem by developing a two-stage model that separately describes
the underlying neural population and blood flow signals,” Nishimoto said.
…..
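A toy version of that two-stage idea: stage one is a fast underlying neural signal, and stage two is the sluggish blood-flow (BOLD) signal that fMRI actually measures, modeled as the neural signal convolved with a hemodynamic response function. The difference-of-gammas HRF shape below is a common textbook approximation, not the one fitted in the study.

```python
import numpy as np
from math import gamma as gamma_fn

# Stage 1: a fast underlying neural response (1 Hz, 60 s toy example),
# here just brief bursts of activity switching on and off quickly.
t = np.arange(60.0)
neural = ((t % 20) < 5).astype(float)

# Stage 2: the slow BOLD signal, modeled as the neural signal convolved
# with a hemodynamic response function (difference-of-gammas assumed).
def hrf(tt, a1=6.0, a2=16.0, ratio=1 / 6):
    g = lambda x, a: x ** (a - 1) * np.exp(-x) / gamma_fn(a)
    return g(tt, a1) - ratio * g(tt, a2)

bold = np.convolve(neural, hrf(t), mode="full")[:len(t)]

# The BOLD peak lags the neural burst by several seconds; this lag is
# why decoding dynamic movies from fMRI alone is hard.
lag = int(np.argmax(bold[:20])) - int(np.argmax(neural[:20]))
print(lag)
```

Because the HRF smears fast neural events over several seconds, a decoder has to account for this lag explicitly; that is the motivation for separating the neural and blood-flow stages.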
…..Ultimately, Nishimoto said, scientists need to understand how the brain processes
dynamic visual events that we experience in everyday life
…..
…..“We need to know how the brain works in naturalistic conditions,” he said.
“For that, we need to first understand how the brain works while we are watching movies.”
…..

The reconstructed video:

Come to think of it, this is a bit like when Joseph uses "Hermit Purple" for spirit photography or mind-reading, reassembling the content from TV fragments ww
Except that the decoded video looks genuinely eerie, the whole process takes an enormous amount of time, and fMRI scanner slots are notoriously hard to book.
Looks like there is still a long road ahead before dreams can be replayed and we get a peek into the inner worlds of all you gentlemen.....

So, Miss そはら, rest assured! Let that infinitely perverted dream world of yours keep evolving! (snort)


Category: Tech & Research


4 comments on "Onward to the Synapse!"

  1. It seems that once face recognition is involved, the activation of the relevant brain regions becomes highly specialized (so the reconstruction fidelity is quite high).

  2. >>It seems that once face recognition is involved, the activation of the relevant brain regions becomes highly specialized (so the reconstruction fidelity is quite high)
    But we were also taught that it is basically very hard to assign any one brain region to a single specific function,
    because the interactions between regions are just too frequent..... I haven't read the paper yet; personally, I'm quite curious about their analysis method.

  3. >Because the interactions between regions are just too frequent....
    The frequency doesn't matter under whole-brain observation. The point is the assumption that face recognition is a specialized set of interacting mechanisms: once the brain decides "it's a face", that whole recognition circuit kicks in, making the reconstruction fidelity higher than for images of other objects (many of the animal clips came out quite blurry).

  4. >>The frequency doesn't matter under whole-brain observation. The point is the assumption that face recognition is a specialized set of interacting mechanisms:
    >>once the brain decides "it's a face", that whole recognition circuit kicks in,
    >>making the reconstruction fidelity higher than for images of other objects (many of the animal clips came out quite blurry)
    Thanks for the further explanation, but even if it is relatively easy to confirm that the object is a face,
    the content of that face still has to be guessed by slowly analyzing the voxels one by one;
    and what I'm curious about is how they pin down the relationship between each voxel's state and the picture,
    since there is no way to tell whether a given voxel changed because of some feature of that face
    or because of the emotion the clip evoked..... Of course, judging from how blurry the videos are,
    the point seems to be successfully reproducing the main subject of the scene, not demanding Blu-ray quality XD
    It just occurred to me that the team prepared 18 million seconds of random YouTube clips;
    analyzing the content of all those clips must have been a massive undertaking too.....

