Book

Robin Cook, “Cell”

Title: Cell
Genre: Medical Thriller
Publisher: Putnam Adult
Publication date: February 4, 2014
Format: Kindle
Pages: 416
ISBN-10: 0399166300 / ISBN-13: 978-0399166303


 

Amazon Review:

The New York Times–bestselling author and master of the medical thriller returns with a top-notch fusion of groundbreaking medical science and edge-of-your-seat suspense.

George Wilson, M.D., a radiology resident in Los Angeles, is about to enter a profession on the brink of an enormous paradigm shift, foreshadowing a vastly different role for doctors everywhere. The smartphone is poised to take on a new role in medicine, no longer as a mere medical app but rather as a fully customizable personal physician capable of diagnosing and treating even better than the real thing. It is called iDoc.

George’s initial collision with this incredible innovation is devastating. He awakens one morning to find his fiancée dead in bed alongside him, not long after she participated in an iDoc beta test. Then several of his patients die after undergoing imaging procedures. All of them had been part of the same beta test.

Is it possible that iDoc is being subverted by hackers—and that the U.S. government is involved in a cover-up? Despite threats to both his career and his freedom, George relentlessly seeks the truth, knowing that if he’s right, the consequences could be lethal.

Visualization

IEEE VIS 2013 – Arts Program (Photos)

I participated in IEEE Visualization (IEEE VIS) in October 2013 and presented my work in both the paper session and the media art exhibition of the VIS Arts Program. Here are some photos I took at the conference venue (Marriott Marquis, Atlanta, GA). You can find the brochure and the description of this program here.


Life

Political sarcasm: the apex of natural language processing.

As online social media evolve alongside the Internet's growing place in our daily lives, people write about many things for many different audiences. Perhaps “talking” would be a better word than “writing” in this context, given the deluge of political commentary and conversation, particularly on social media.

 

Many social and computer scientists have been trying to understand what kinds of patterns can be identified in these texts and how computers can automatically classify their semantic meaning or sentiment on online social platforms. Although the feasibility of the task is still debated, the quality of prediction has improved.

 

For computer-based prediction or recommendation, machine learning carries the core responsibility, since its role is to make judgments based on a priori knowledge. This knowledge usually takes the form of systematic rules or consistent patterns. For example, a computer tokenizes each sentence into individual elements (words) and infers the most likely reading among a few candidates based on grammatical rules. Part-of-speech (word class, or lexical class) tagging is one of the most well-established techniques in natural language processing.
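As a concrete illustration, here is a minimal sketch of tokenization and part-of-speech tagging using NLTK (my own choice of library; the model names in the download calls are assumptions about a standard NLTK setup):

```python
# Minimal sketch: tokenize a sentence and tag each token with its part of speech.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer models
nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger model

sentence = "The senator bravely promised to cut taxes again."
tokens = nltk.word_tokenize(sentence)   # split the sentence into word-level tokens
tagged = nltk.pos_tag(tokens)           # assign a part-of-speech tag to each token
print(tagged)
# e.g. [('The', 'DT'), ('senator', 'NN'), ('bravely', 'RB'), ('promised', 'VBD'), ...]
```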

 

However, in the context of political sarcasm or satire, one of the most common practices on social media, it is impossible to rely on these grammatical rules alone. Most such expressions do not mean what they literally say. Irony in writing has been studied for a long time, yet it is still considered an unreliable feature among computer scientists. I think the key obstacle in this line of work is the unpatterned discrepancy between the surface meaning of a text and the real semantics underneath.

 

The cultural dependency inherent in written sarcasm is another big challenge. It makes the task a multi-dimensional problem involving many dependent variables. Moreover, those variables are mostly topic-specific, context-dependent, and temporal.

Life

Machine Learning Competition. Play with the historical data set.

On September 28, 2012, kaggle.com announced a multi-year open data-mining competition on their website.

The project is dubbed “Titanic: Machine Learning from Disaster” and, so far, 1,174 teams have submitted their work to the website.

The data set describes the passengers of the Titanic. You may discover interesting patterns and implications that can be used to predict who survived this historic disaster.

Check out their website!

http://www.kaggle.com/c/titanic-gettingStarted
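For anyone who wants to play with the data, here is a minimal sketch of a first submission using pandas and scikit-learn (my own choice of tools; the column names are those used in the competition's train.csv and test.csv files):

```python
# Minimal sketch: train a simple classifier on the Titanic data and write a submission file.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

features = ["Pclass", "Sex", "SibSp", "Parch"]
X = pd.get_dummies(train[features])       # one-hot encode the categorical 'Sex' column
X_test = pd.get_dummies(test[features])
y = train["Survived"]

model = RandomForestClassifier(n_estimators=100, random_state=1)
model.fit(X, y)

submission = pd.DataFrame({"PassengerId": test["PassengerId"],
                           "Survived": model.predict(X_test)})
submission.to_csv("submission.csv", index=False)
```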

RMS Titanic, a British passenger liner that sank in the Atlantic Ocean in 1912. (photo excerpt from forbes.com)

CS

Publications

• Research Projects

‘Making Visible the Invisible’: Experimental Data Visualization Project

- Credibility in Online Social Media (Twitter, LinkedIn and Facebook)

- Deconstructing Information Credibility on Twitter

- Discovering Patterns in Online Streams

- Interactive Data Visualization (WiGis) and Infographics: here

- Real Time Hand Pose Recognition with Depth Sensors for Mixed Reality Interfaces

- Discovering novel features for semantic hand gesture tracking and recognition

- Semantic Web and Trust Model for Ontologies

- DynaBook Project (Preliminary Visualization for Data Mining): here

 

• Media Art Projects and Exhibitions

- ‘TweetProbe: A Real-Time Microblog Stream Visualization Framework’, IEEE VIS 2013 Art Show. (Oct. 13 – 18, 2013)

- Media art work ‘Korea-2012’ by George Legrady and Byungkyu Kang was invited to the exhibition ‘Data Curation’ at the Seoul National University Museum of Art. (May – Aug 2013)

‘Making Visible the Invisible’: Experimental Data Visualization Project

 

• Publications

- John O’Donovan, Byungkyu Kang, “Competence Modeling in Twitter: Mapping Theory to Practice”, International Conference on Social Computing (SocialCom 2013), Palo Alto, California, USA, May 27–29, 2014.

- Sujoy Sikdar, Sibel Adali, Md Tanvir Amin, Tarek Abdelzaher, Kevin Chan, Jin-Hee Cho, Byungkyu Kang, John O’Donovan, “Finding True and Credible Information on Twitter”, 17th International Conference on Information Fusion (IEEE FUSION), Salamanca, Spain, July 2014.

- Byungkyu Kang, George Legrady and Tobias Hollerer, “TweetProbe: A Real-Time Microblog Stream Visualization Framework”, In Proceedings of the IEEE VIS Arts Program (VISAP), Atlanta, Georgia, October 2013.

- Sujoy Sikdar, Byungkyu Kang, John O’Donovan, Tobias Hollerer, Sibel Adali, “Understanding Information Credibility on Twitter”, IEEE SocialCom, 2013.

- Sujoy Sikdar, Byungkyu Kang, John O’Donovan, Tobias Hollerer, Sibel Adali, “Cutting Through the Noise: Defining Ground Truth in Information Credibility on Twitter”, ASE Human Journal 2.1 (2013).

- Byungkyu Kang, Mathieu Rodrigue, Tobias Hollerer and Hwasup Lim, “Poster: Real Time Hand Pose Recognition with Depth Sensors for Mixed Reality Interfaces”, IEEE Symposium on 3D User Interfaces (3DUI), 2013.

- James Schaffer, Byungkyu Kang, Tobias Hollerer, Hengchang Liu and John O’Donovan, “Interactive Interfaces for Complex Network Analysis: A QoI Perspective”, IEEE International Conference on Pervasive Computing and Communications (PERCOM), 2013.

- John O’Donovan, Byungkyu Kang, Greg Meyer, Tobias Höllerer and Sibel Adali, “Credibility in Context: An Analysis of Feature Distributions in Twitter”, IEEE SocialCom 2012, Amsterdam, The Netherlands, 2012. (9.8% acceptance rate)

- Byungkyu Kang, John O’Donovan and Tobias Hollerer, “Modeling Topic Specific Credibility in Twitter”, International Conference on Intelligent User Interfaces (IUI), 2012.

- Byungkyu Kang, John O’Donovan and Tobias Hollerer, “A Framework for Modeling Trust in Collaborative Ontologies”, Proceedings of the Sixth Graduate Student Workshop on Computing, UC Santa Barbara, 2011, 39–40.

 

• Posters

- A Framework for Modeling Trust in Collaborative Ontologies, GSWC at UCSB, 2011: here

Life

Fuzzy Text Classification

If your work involves frequent text processing to extract quantitative information, such as statistics, from plain-text data, you might want to consider machine-learning-based approaches.

In that case, fuzzy text classification may contribute a great deal to the task at hand.

This simple but efficient algorithm has been adopted in a number of natural language processing tasks.

http://www0.cs.ucl.ac.uk/staff/a.hunter/tradepress/fuzzy.html
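To make the general idea concrete, here is a toy sketch of fuzzy classification: each document receives a degree of membership in every class rather than a single hard label. The keyword lists and scoring below are illustrative assumptions of mine, not the algorithm described at the link above.

```python
# Toy sketch: graded class membership from keyword overlap, instead of a hard label.
from collections import Counter

CLASS_KEYWORDS = {
    "politics": {"election", "senate", "policy", "vote", "campaign"},
    "sports":   {"game", "score", "team", "season", "coach"},
    "tech":     {"software", "data", "algorithm", "sensor", "network"},
}

def fuzzy_classify(text):
    tokens = Counter(text.lower().split())
    memberships = {}
    for label, keywords in CLASS_KEYWORDS.items():
        hits = sum(tokens[w] for w in keywords)
        memberships[label] = hits / max(sum(tokens.values()), 1)
    total = sum(memberships.values())
    if total > 0:   # normalize so the membership degrees sum to 1
        memberships = {k: v / total for k, v in memberships.items()}
    return memberships

print(fuzzy_classify("the senate vote on the new data policy"))
# {'politics': 0.75, 'sports': 0.0, 'tech': 0.25}
```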

Life

IUI2012

The International Conference on Intelligent User Interfaces 2012 (IUI 2012) was held in Lisbon, Portugal (Feb 13–17). As I had expected, many experienced attendees said that it remains a small venue with comparatively few people. However, the range of fields covered by the seven sessions was quite wide (arguably even wider than at larger conferences), so I was able to learn about and experience a number of other research areas as well.

I presented a full paper about ‘Modeling Topic Specific Credibility in Microblog’ on the last day of the conference.


Life

What’s Big Data?

The concept of Big Data

 

(1) A definition of Big Data provided by McKinsey

http://www.mckinsey.com/insights/mgi/research/technology_and_innovation/big_data_the_next_frontier_for_innovation

 

(2) Another definition of Big Data, published by IDC about a month after the one above.

http://idcdocserv.com/1142

 

A screenshot of the Trendsmap.com service.

Computer Vision

A new CMOS Sensor Takes Range, RGB Images at Same Time

[ISSCC] Samsung’s CMOS Sensor Takes Range, RGB Images at Same Time

Feb 25, 2012 14:18
Tomonori Shindou, Nikkei Electronics

A range image taken by the sensor

Samsung Electronics Co Ltd developed what it claims is the world’s first CMOS sensor that can obtain a range image and a normal RGB (red, green and blue) image at the same time.

The sensor was announced at ISSCC 2012, which took place from Feb 19 to 23, 2012, in the US (paper number 22.7).

As a method of obtaining a range image, the sensor uses the commonly used ToF (time-of-flight) method. In the past, Samsung Advanced Institute of Technology (SAIT) announced a technology to integrate pixels for obtaining range images (Z pixels) and RGB pixels on one image sensor. But, due to limitations related to the near-infrared filter, etc., that sensor could not obtain a range image and an RGB image simultaneously in a strict sense; the output was produced in a time-sharing manner.

Range image sensors are drawing attention because of the success of Microsoft Corp’s “Kinect” gesture-based controller. However, the Kinect is equipped with a range image sensor that uses the structured-light method, and an image sensor for RGB images is required in addition to the range image sensor. Likewise, a stereo method requires two cameras to obtain parallax.

With the new technology, a normal RGB image and a range image can be obtained at the same time using a single image sensor, making it possible to reduce the size of gesture-based controllers and similar devices. The technology might also make it easy to add a range-measurement function to digital cameras, camcorders, and other devices so that they can recognize gestures.

Article continued here (http://techon.nikkeibp.co.jp/english/NEWS_EN/20120225/206010/)

Augmented Reality

Adaptive Interfaces for Marker-based Augmented Reality

CS290I Mixed and Augmented Reality [2012 Winter]

Project Report. 

March 21, 2012
[Individual Project]  Byungkyu (Jay) Kang

Overview of the Project

Augmented reality (AR) techniques have been developed for more than a decade across a variety of fields, including computer science, and the number of possible applications is growing rapidly. As the technology develops, we face not only increased demand in the real world but also the need for more advanced interfaces and for smoother integration of the technology with the real world. In this project, we propose several interfaces that can be used on both desktop platforms and in mobile environments. We suggest four different user interfaces (recognition, reflection, sound, and motion interactions) for a more realistic augmented reality experience.

Main Feature

The contribution of this work is harnessing four simple but effective user-interface mechanisms for augmented reality. This section gives a brief description of each interface; the detailed specification and implementation methodology of each technique are discussed in the following sections.

  1. Nested marker for extended distance
    For the first user interface, we exploit the nested architecture of a single marker. A nested marker supports a longer recognition distance through a larger printed marker. Our implementation also provides a seamless transition between two different layers of a nested marker.
  2. Light source responsive object rendering
    Rendered images are often composited unrealistically with real camera frames: a crisp 3D or 2D object on a blurry camera image, or a bright augmented object in dark surroundings, are the most common examples. We provide a lighting mechanism in the OpenGL rendering loop that interacts with real background light sources.
  3. Sound manipulator
    There are many possible interfaces between human and computer, and even a conveniently placed mobile device is sometimes out of reach. We can interact with AR applications and objects by making specific sounds, such as clapping or snapping our fingers.
  4. Mask based object interaction
    Direct interaction with augmented objects in real space through our body (e.g., a fingertip) is perhaps the most effective and intuitive interface in virtual or augmented reality. In this project, we first detect markers and find the vertices at each corner. Using them, we generate four regions of interest (ROIs) and assign a specific effect, such as rotation or rescaling, to each.

Implementation and Evaluation

1. System Design

The application implemented for this project was originally intended for a mobile environment, where more sensing modalities (e.g., gyroscope, compass) can be exploited. Due to time constraints, however, the implementation on a mobile device (iOS) is incomplete and still in progress. (Image input/output, the basic user interface (the storyboard environment for iOS 4), and preliminary image processing are implemented.) We will update this post with further progress and the final result as soon as possible.

1.1. System Overview

- Platform: Open platform (developed on Mac OS X 10.7.2 with Xcode)
- Libraries used: OpenCV, OpenGL, ARToolKit 2.72.1, PortAudio

1.2. System Architecture

1.2.1. Feature Matching

Not only ARToolKit but most augmented reality frameworks and libraries follow the same basic feature-matching algorithm, shown in Figure 1.

Figure 1. Feature Matching in ARToolkit
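As a rough, illustrative sketch of that per-frame pipeline (my own paraphrase in Python/OpenCV, not ARToolKit's actual source), the steps are: threshold the frame, find candidate quadrilaterals, unwarp each candidate, and match it against the trained patterns.

```python
# Rough sketch of a marker-detection pipeline: threshold -> candidate quads ->
# unwarp -> template match against trained patterns (64x64 grayscale images).
import cv2
import numpy as np

def detect_markers(frame, patterns, size=64, match_thresh=0.7):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    square = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    detections = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(approx) != 4 or cv2.contourArea(approx) < 500:
            continue                               # keep only reasonably large quadrilaterals
        corners = approx.reshape(4, 2).astype(np.float32)
        H = cv2.getPerspectiveTransform(corners, square)
        patch = cv2.warpPerspective(gray, H, (size, size))
        for name, pattern in patterns.items():     # patterns: dict of name -> 64x64 uint8 image
            score = cv2.matchTemplate(patch, pattern, cv2.TM_CCOEFF_NORMED)[0, 0]
            if score > match_thresh:
                detections.append((name, corners)) # corners can then feed pose estimation
    return detections
```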

1.2.2. System Architecture

Our system has three main tracks between each raw frame and the final rendering. The nested marker can be trained individually with ARToolKit. We generated the marker pattern files with the Flash-based online marker generator [2] and added a smooth scaler computed from the detected marker size.

Figure 2. System Architecture

2. Proposed Method

2.1. Nested Marker

Figure 3. Prototype of nested marker

Since a nested marker contains more than four copies of the same marker image, we can extend the recognition distance as far as necessary, as shown in Figure 3. However, a longer recognition distance requires a larger printed marker, which is one of the drawbacks of our interface. The problem our approach addresses is the physical limitation of recognition distance, particularly in outdoor augmented reality. A similar technique was introduced by Tateno et al. [1], but our goal focuses purely on providing a smooth, seamless extension of recognition to longer distances. Tateno et al. intended to provide varied user experiences with a multi-layered nested marker containing different sub-images; that objective can now be achieved by incorporating multiple renderings with different offsets in an application. Smooth scaling is done by detecting the area of the recognized marker in real time: we rescale the rendered object by multiplying it by a coefficient derived from the detected marker size. When nested sub-images are not centered, we can assign an offset value as well.
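The following is a minimal, illustrative sketch (not the project's actual code) of the smooth-scaling idea: derive a scale coefficient from the detected marker's on-screen area, low-pass filter it, and blend between the outer and inner layers of the nested marker as the camera approaches. The reference areas are assumed values.

```python
# Illustrative sketch: scale coefficient from detected marker area, plus a
# blend factor for switching between nested-marker layers.
REFERENCE_AREA = 80.0 * 80.0   # marker area (px^2) at which scale == 1.0 (assumed)

def smooth_scale(marker_area_px, prev_scale, alpha=0.2):
    """Exponentially smoothed scale coefficient derived from the marker area."""
    raw = (marker_area_px / REFERENCE_AREA) ** 0.5     # ratio of side lengths
    return (1 - alpha) * prev_scale + alpha * raw      # low-pass filter for a seamless transition

def layer_blend(marker_area_px, switch_area=160.0 * 160.0, band=0.25):
    """0.0 -> render against the outer (large) layer, 1.0 -> the inner (small) layer."""
    t = (marker_area_px - switch_area * (1 - band)) / (switch_area * 2 * band)
    return min(1.0, max(0.0, t))
```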

The ARToolKit documentation also explicitly mentions the limitation on recognition distance (range issues). Figure 3.1 shows the table on range issues from the ARToolKit documentation, and Figure 3.2 describes our nested-marker model.

Figure 3.1. Tracking range for different sized patterns (from ARToolkit Documentation)

Figure 3.2. Nested marker model

2.2. Light source responsive object rendering

Figure 4. Flowchart of the algorithm of light aware rendering technique

As can be seen in Figure 2, the light-source-aware rendering algorithm applies appropriate image processing to the real-world camera input. A similar approach was taken by Liu et al. [3], which exploits the spatial and temporal coherence of illumination in the real camera image. Although their algorithm is comparatively robust, the paper reports only near-real-time performance even with an optimized configuration. Our algorithm achieves real-time rendering with OpenGL without such optimization, by approximating the contours obtained from light-source detection. Real-time performance is especially crucial for mobile augmented reality applications. By employing light-source-aware rendering in an augmented reality application, the user experiences greater realism, because the rendered object responds to the surrounding lighting conditions. The algorithm is described in Figure 4 below.
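As an illustrative sketch of the detection step (in Python/OpenCV rather than the project's C implementation; the thresholds are assumptions), the brightest regions of the frame are thresholded on the lightness channel, their contours extracted, and the largest one converted into a light direction and intensity that the rendering loop could use:

```python
# Illustrative sketch: find the dominant bright region in the camera frame and
# turn it into a light direction and intensity for the rendering loop.
import cv2
import numpy as np

def estimate_light(frame_bgr, brightness_thresh=230):
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    L = lab[:, :, 0]                                    # lightness channel
    _, mask = cv2.threshold(L, brightness_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                     # no strong light source in view
    largest = max(contours, key=cv2.contourArea)
    M = cv2.moments(largest)
    if M["m00"] == 0:
        return None
    cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]   # centroid of the bright region
    h, w = L.shape
    direction = ((cx / w) * 2 - 1, 1 - (cy / h) * 2)    # normalized screen-space direction
    intensity = float(np.mean(L[mask > 0])) / 255.0     # crude brightness estimate
    return direction, intensity
```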

Figure 5. Light Source Extraction - Upper two images are from OpenGL and OpenCV output, bottom left is for Lab image with highlight of detected light sources. Bottom right image is found contours from cvFindContour function.

Figure 6. Fast Fourier Transform visualization of the sound of snapping fingers fed as input interaction

2.3. Sound manipulator

One of the two additional approaches we intended to add in this project is sound-based interaction with augmented objects. This approach can be very simple yet powerful, unless the user is interacting with the system in a very noisy public space. Since each sound has a unique signature in its frequency distribution, the system can recognize certain frequencies relative to the average amplitude of the whole input signal. A preliminary implementation was done in Processing; for example, the distinctive distribution in the fast Fourier transform (FFT) visualization can be seen in Figure 6. Since this functionality must be merged with our system, a C-based sound processing library has to be used instead of Processing, which is Java based. The PortAudio library was chosen as the audio processing layer for analyzing microphone input. Implementation of this module is still in progress.
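The detection idea can be sketched as follows (in Python/NumPy for readability, rather than the PortAudio/C layer the project targets; the sample rate, block size, and frequency band are assumptions): a finger snap appears as a short burst of energy concentrated in high-frequency FFT bins relative to the average amplitude of the block.

```python
# Illustrative sketch: flag a snap when high-frequency FFT energy clearly
# exceeds the block's average spectral amplitude.
import numpy as np

SAMPLE_RATE = 44100   # Hz, assumed microphone sample rate
BLOCK = 2048          # samples per analysis block

def is_snap(block, band=(2000, 8000), ratio_thresh=4.0):
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / SAMPLE_RATE)
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])]
    return in_band.mean() > ratio_thresh * spectrum.mean()
```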

 

2.4. Mask based object interaction

This interaction approach has appeared in a number of previous works, so we describe it only briefly with Figure 7. We first obtain the orientation of each detected marker from the ‘marker_info‘ pointer variable in the ARToolKit library. After the up/down and left/right positions of a marker are found, four regions of interest, each half the size of the marker area, are assigned to the edges of the marker. Each region is linked to a specific event that is activated when motion flow is detected in the corresponding ROI.
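A minimal sketch of this trigger logic (not the project's actual code; the ROI geometry and thresholds are assumptions) places four regions around a detected marker and fires an event when simple frame differencing shows enough motion inside one of them:

```python
# Illustrative sketch: four ROIs around a detected marker, with motion detected
# by frame differencing inside each region.
import cv2
import numpy as np

def marker_rois(corners, frame_shape):
    """Four ROIs (top/bottom/left/right), each roughly half the marker size."""
    x, y, w, h = cv2.boundingRect(corners.astype(np.int32))
    H, W = frame_shape[:2]
    return {
        "top":    (x, max(0, y - h // 2), w, h // 2),
        "bottom": (x, min(H - 1, y + h), w, h // 2),
        "left":   (max(0, x - w // 2), y, w // 2, h),
        "right":  (min(W - 1, x + w), y, w // 2, h),
    }

def detect_motion_events(prev_gray, gray, rois, motion_thresh=12.0):
    """Return the names of ROIs whose mean frame difference exceeds the threshold."""
    diff = cv2.absdiff(gray, prev_gray)
    events = []
    for name, (x, y, w, h) in rois.items():
        patch = diff[y:y + h, x:x + w]
        if patch.size and patch.mean() > motion_thresh:
            events.append(name)   # e.g. map "left"/"right" to rotation, "top" to rescaling
    return events
```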

Figure 7. A systemic description of marker interaction with motion flow detection

Figure 8. Finding four distinctive corners of detected marker

 

Demonstration

Demo videos: nested marker, light-source-aware rendering.

References

[1] Hirokazu Kato, Mark Billinghurst, Marker Tracking and HMD Calibration for a Video-Based Augmented Reality Conferencing System, Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality, p. 85, October 20-21, 1999.

[2] ARToolKit Marker Generator Online
http://flash.tarotaro.org/blog/2008/12/14/artoolkit-marker-generator-online-released/

[3] Yanli Liu, Xavier Granier: Online Tracking of Outdoor Lighting Variations for Augmented Reality with Moving Cameras. IEEE Trans. Vis. Comput. Graph. 18(4): 573-580 (2012)

[4] The skeleton code for this project is based on SimpleTest.c from the ARToolKit 2.72.1 open-source code. http://www.hitl.washington.edu/artoolkit/download/

[5] OpenCV 2.0 Documentation (OpenCV 2.0 C Reference) http://opencv.willowgarage.com/documentation/