LEADER 03720nam a22005055i 4500
001    978-3-319-61807-4
003    DE-He213
005    20170831123154.0
007    cr nn 008mamaa
008    170831s2017 gw | s |||| 0|eng d
020    |a 9783319618074 |9 978-3-319-61807-4
024 7  |a 10.1007/978-3-319-61807-4 |2 doi
040    |d GrThAP
050  4 |a RC321-580
072  7 |a PSAN |2 bicssc
072  7 |a MED057000 |2 bisacsh
082 04 |a 612.8 |2 23
100 1  |a Shah, Rajiv. |e author.
245 10 |a Multimodal Analysis of User-Generated Multimedia Content |h [electronic resource] / |c by Rajiv Shah, Roger Zimmermann.
264  1 |a Cham : |b Springer International Publishing : |b Imprint: Springer, |c 2017.
300    |a XXII, 263 p. 63 illus., 42 illus. in color. |b online resource.
336    |a text |b txt |2 rdacontent
337    |a computer |b c |2 rdamedia
338    |a online resource |b cr |2 rdacarrier
347    |a text file |b PDF |2 rda
490 1  |a Socio-Affective Computing, |x 2509-5706 ; |v 6
520    |a This book presents a study of semantics and sentics understanding derived from user-generated multimodal content (UGC). It enables researchers to learn about the ways multimodal analysis of UGC can augment semantics and sentics understanding and it helps in addressing several multimedia analytics problems from social media such as event detection and summarization, tag recommendation and ranking, soundtrack recommendation, lecture video segmentation, and news video uploading. Readers will discover how the derived knowledge structures from multimodal information are beneficial for efficient multimedia search, retrieval, and recommendation. However, real-world UGC is complex, and extracting the semantics and sentics from only multimedia content is very difficult because suitable concepts may be exhibited in different representations. Moreover, due to the increasing popularity of social media websites and advancements in technology, it is now possible to collect a significant amount of important contextual information (e.g., spatial, temporal, and preferential information). Thus, there is a need to analyze the information of UGC from multiple modalities to address these problems. A discussion of multimodal analysis is presented followed by studies on how multimodal information is exploited to address problems that have a significant impact on different areas of society (e.g., entertainment, education, and journalism). Specifically, the methods presented exploit the multimedia content (e.g., visual content) and associated contextual information (e.g., geo-, temporal, and other sensory data). The reader is introduced to several knowledge bases and fusion techniques to address these problems. This work includes future directions for several interesting multimedia analytics problems that have the potential to significantly impact society. The work is aimed at researchers in the multimedia field who would like to pursue research in the area of multimodal analysis of UGC.
650  0 |a Medicine.
650  0 |a Neurosciences.
650  0 |a Data mining.
650  0 |a Semantics.
650  0 |a Cognitive psychology.
650 14 |a Biomedicine.
650 24 |a Neurosciences.
650 24 |a Data Mining and Knowledge Discovery.
650 24 |a Semantics.
650 24 |a Cognitive Psychology.
700 1  |a Zimmermann, Roger. |e author.
710 2  |a SpringerLink (Online service)
773 0  |t Springer eBooks
776 08 |i Printed edition: |z 9783319618067
830  0 |a Socio-Affective Computing, |x 2509-5706 ; |v 6
856 40 |u http://dx.doi.org/10.1007/978-3-319-61807-4 |z Full Text via HEAL-Link
912    |a ZDB-2-SBL
950    |a Biomedical and Life Sciences (Springer-11642)