
Download the book: Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity

Persian title: کتاب راهنمای هوش مصنوعی برای موسیقی: مبانی، رویکردهای پیشرفته و تحولات خلاقیت

Original title: Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity
Publisher: Springer
Author: Eduardo Reck Miranda
ISBN: 3030721159, 9783030721152
Year of publication: 2021
Language: English
Number of pages: 1007
Category: Music
File format: PDF (convertible to other formats)
File size: 27 MB

Note: if the author is Iranian, the book cannot be downloaded and the payment will be refunded.

Status: In stock

Price: 52,000 Toman

The download becomes available immediately after payment.


View the book on Amazon

About the Book


This book provides comprehensive coverage of the latest advances in research on enabling machines to listen to and compose new music. It includes chapters introducing what we know about human musical intelligence and how this knowledge can be simulated with artificial intelligence. The development of interactive musical robots and emerging new approaches to AI-based musical creativity are also introduced, including brain-computer music interfaces, bio-processors, and quantum computing.

Artificial intelligence (AI) technology permeates the music industry, from management systems for recording studios to recommender systems for the online commercialization of music via the Internet. Yet while AI for online music distribution is well advanced, this book focuses on a largely unexplored application: AI for creating actual musical content.

Table of Contents

Foreword: From Audio Signals to Musical Meaning
References
Preface
Contents
Editor and Contributors
1 Sociocultural and Design Perspectives on AI-Based Music Production: Why Do We Make Music and What Changes if AI Makes It for Us?
1.1 Introduction
1.2 The Philosophical Era
1.3 Creative Cognition and Lofty Versus Lowly Computational Creativity
1.4 The Design Turn
1.4.1 Design Evaluation
1.5 The Sociological View
1.5.1 Cluster Concepts and Emic Versus Etic Definitions
1.5.2 Social Perspectives on the Psychology of Creativity
1.5.3 Social Theories of Taste and Identity
1.5.4 Why Do We Make and Listen to Music?
1.6 Discussion
2 Human–Machine Simultaneity in the Compositional Process
2.1 Introduction
2.2 Machine as Projection Space
2.3 Temporal Interleaving
2.4 Work
2.5 Artistic Research
2.6 Suspension
3 Artificial Intelligence for Music Composition
3.1 Introduction
3.2 Artificial Intelligence and Distributed Human–Computer Co-creativity
3.3 Machine Learning: Applications in Music and Compositional Potential
3.3.1 Digital Musical Instruments
3.3.2 Interactive Music Systems
3.3.3 Computational Aesthetic Evaluation
3.3.4 Human–Computer Co-exploration
3.4 Conceptual Considerations
3.4.1 The Computer as a Compositional Prosthesis
3.4.2 The Computer as a Virtual Player
3.4.3 Artificial Intelligence as a Secondary Agent
3.5 Limitations of Machine Learning
3.6 Composition and AI: The Road Ahead
Acknowledgements
References
4 Artificial Intelligence in Music and Performance: A Subjective Art-Research Inquiry
4.1 Introduction
4.2 Combining Art, Science and Sound Research
4.2.1 Practice-Based Research and Objective Knowledge
4.2.2 Artistic Intervention in Scientific Research
4.3 Machine Learning as a Tool for Musical Performance
4.3.1 Corpus Nil
4.3.2 Scientific and Artistic Drives
4.3.3 Development and Observations
4.4 Artificial Intelligence as Actor in Performance
4.4.1 Humane Methods
4.4.2 Scientific and Artistic Drives
4.4.3 Development and Observations
4.5 Discussion
4.5.1 Artificial Intelligence and Music
4.5.2 From Machine Learning to Artificial Intelligence
4.5.3 Hybrid Methodology
5 Neuroscience of Musical Improvisation
5.1 Introduction
5.2 Cognitive Neuroscience of Music
5.3 Intrinsic Networks of the Brain
5.4 Temporally Precise Indices of Brain Activity in Music
5.5 Attention Toward Moments in Time
5.6 Prediction and Reward
5.7 Music and Language Learning
5.8 Conclusions: Creativity at Multiple Levels
References
6 Discovering the Neuroanatomical Correlates of Music with Machine Learning
6.1 Introduction
6.2 Brain and Statistical Learning Machine
6.2.1 Prediction and Entropy Encoding
6.2.2 Learning
6.2.2.1 Timbre, Phoneme, and Pitch: Distributional Learning
6.2.2.2 Chunk and Word: Transitional Probability
6.2.2.3 Syntax and Grammar: Local Versus Non-local Dependencies
6.2.3 Memory
6.2.3.1 Semantic Versus Episodic
6.2.3.2 Short-Term Versus Long-Term
6.2.3.3 Consolidation
6.2.4 Action and Production
6.2.5 Social Communication
6.3 Computational Model
6.3.1 Mathematical Concepts of the Brain’s Statistical Learning
6.3.2 Statistical Learning and the Neural Network
6.4 Neurobiological Model
6.4.1 Temporal Mechanism
6.4.2 Spatial Mechanism
6.4.2.1 Domain Generality Versus Domain Specificity
6.4.2.2 Probability Encoding
6.4.2.3 Uncertainty Encoding
6.4.2.4 Consolidation of Statistically Learned Knowledge
6.5 Future Direction: Creativity
6.5.1 Optimization for Creativity Rather than Efficiency
6.5.2 Cognitive Architectures
6.5.3 Neuroanatomical Correlates
6.5.3.1 Frontal Lobe
6.5.3.2 Cerebellum
6.5.3.3 Neural Network
6.6 Concluding Remarks
Acknowledgements
References
7 Music, Artificial Intelligence and Neuroscience
7.1 Introduction
7.2 Music
7.3 Artificial Intelligence
7.4 Neuroscience
7.5 Music and Neuroscience
7.6 Artificial Intelligence and Neuroscience
7.7 Music and Artificial Intelligence
7.8 Music, AI, and Neuroscience: A Test
7.9 Concluding Discussion
References
8 Creative Music Neurotechnology
8.1 Introduction
8.2 Sound Synthesis with Real Neuronal Networks
8.3 Raster Plot: Making Music with Spiking Neurones
8.4 Symphony of Minds Listening: Listening to the Listening Mind
8.4.1 Brain Scanning and Analysis
8.4.2 The Compositional Process
8.4.3 The Musical Engine: MusEng
8.4.3.1 Learning Phase
8.4.3.2 Generative Phase
8.4.3.3 Transformative Phase
Pitch Inversion Algorithm
Pitch Scrambling Algorithm
8.5 Brain-Computer Music Interfacing
8.5.1 ICCMR’s First SSVEP-Based BCMI System
8.5.2 Activating Memory and The Paramusical Ensemble
8.6 Concluding Discussion and Acknowledgements
Acknowledgements
Appendix: Two Pages of Raster Plot
References
9 On Making Music with Heartbeats
9.1 Introduction
9.1.1 Why Cardiac Arrhythmias
9.1.2 Why Music Representation
9.1.3 Hearts Driving Music
9.2 Music Notation in Cardiac Auscultation
9.2.1 Venous Hum
9.2.2 Heart Murmurs
9.3 Music Notation of Cardiac Arrhythmias
9.3.1 Premature Ventricular and Atrial Contractions
9.3.2 A Theory of Beethoven and Arrhythmia
9.3.3 Ventricular and Supraventricular Tachycardias
9.3.4 Atrial Fibrillation
9.3.5 Atrial Flutter
9.4 Music Generation from Abnormal Heartbeats
9.4.1 A Retrieval Task
9.4.2 A Matter of Transformation
9.5 Conclusions and Discussion
10 Cognitive Musicology and Artificial Intelligence: Harmonic Analysis, Learning, and Generation
10.1 Introduction
10.2 Classical Artificial Intelligence Versus Deep Learning
10.3 Melodic Harmonization: Symbolic and Subsymbolic Models
10.4 Inventing New Concepts: Conceptual Blending in Harmony
10.5 Conclusions
References
11 On Modelling Harmony with Constraint Programming for Algorithmic Composition Including a Model of Schoenberg's Theory of Harmony
11.1 Introduction
11.2 Application Examples
11.2.1 Automatic Melody Harmonisation
11.2.2 Modelling Schoenberg's Theory of Harmony
11.2.3 A Compositional Application in Extended Tonality
11.3 Overview: Constraint Programming for Modelling Harmony
11.3.1 Why Constraint Programming for Music Composition?
11.3.2 What Is Constraint Programming?
11.3.3 Music Constraint Systems for Algorithmic Composition
11.3.4 Harmony Modelling
11.3.5 Constraint-Based Harmony Systems
11.4 Case Study: A Constraint-Based Harmony Framework
11.4.1 Declaration of Chord and Scale Types
11.4.2 Temporal Music Representation
11.4.3 Chords and Scales
11.4.4 Notes with Analytical Information
11.4.5 Degrees, Accidentals and Enharmonic Spelling
11.4.6 Efficient Search with Constraint Propagation
11.4.7 Implementation
11.5 An Example: Modelling Schoenberg's Theory of Harmony
11.5.1 Score Topology
11.5.2 Pitch Resolution
11.5.3 Chord Types
11.5.4 Part Writing Rules
11.5.5 Simplified Root Progression Directions: Harmonic Band
11.5.6 Chord Inversions
11.5.7 Refined Root Progression Rules
11.5.8 Cadences
11.5.9 Dissonance Treatment
11.5.10 Modulation
11.6 Discussion
11.6.1 Comparison with Previous Systems
11.6.2 Limitations of the Framework
11.6.3 Completeness of Schoenberg Model
11.7 Future Research
11.7.1 Supporting Musical Form with Harmony
11.7.2 Combining Rule-Based Composition with Machine Learning
11.8 Summary
12 Constraint-Solving Systems in Music Creation
12.1 Introduction
12.2 Early Rule Formalizations for Computer-Generated Music
12.3 Improving Your Chances
12.4 Making Room for Exceptions
12.5 The Musical Challenge
12.6 Opening up for Creativity
12.7 The Need for Higher Efficiency
12.8 OMRC > PWMC > ClusterEngine
12.8.1 Musical Potential
12.8.2 Challenging Order
12.8.3 An Efficient User Interface
12.9 Future Developments and Final Remarks
References
13 AI Music Mixing Systems
13.1 Introduction
13.2 Decision-Making Process
13.2.1 Knowledge Encoding
13.2.2 Expert Systems
13.2.3 Data Driven
13.2.4 Decision-Making Summary
13.3 Audio Manipulation
13.3.1 Adaptive Audio Effects
13.3.2 Direct Transformation
13.3.3 Audio Manipulation Summary
13.4 Human-Computer Interaction
13.4.1 Automatic
13.4.2 Independent
13.4.3 Recommendation
13.4.4 Discovery
13.4.5 Control-Level Summary
13.5 Further Design Considerations
13.5.1 Mixing by Sub-grouping
13.5.2 Intelligent Mixing Systems in Context
13.6 Discussion
13.7 The Future of Intelligent Mixing Systems
14 Machine Improvisation in Music: Information-Theoretical Approach
14.1 What Is Machine Improvisation
14.2 How It All Started: Motivation and Theoretical Setting
14.2.1 Part 1: Stochastic Modeling, Prediction, Compression, and Entropy
14.3 Generation of Music Sequences Using Lempel-Ziv (LZ)
14.3.1 Incremental Parsing
14.3.2 Generative Model Based on LZ
14.4 Improved Suffix Search Using Factor Oracle Algorithm
14.5 Lossless Versus Lossy Compression Methods for Machine Improvisation
14.6 Variable Markov Oracle
14.7 Query-Based Improvisation Algorithm
14.7.1 Query-Matching Algorithm
14.8 Part 2: Variational Encoding, Free Energy, and Rate Distortion
14.8.1 Variational Free Energy
14.8.2 Rate Distortion and Human Cognition
14.9 VAE Latent Information Bounds
14.10 Deep Music Information Dynamics
14.10.1 Representation–Prediction Rate Distortion
14.11 Relation to VMO Analysis
14.11.1 Controlling Information Rate Between Encoder and Decoder
14.12 Experimental Results
14.12.1 Experimental Results
14.13 Summary and Discussion
15 Structure, Abstraction and Reference in Artificial Musical Intelligence
15.1 Introduction
15.2 The Nature of Music
15.3 Hierarchy in Music Representation
15.4 Abstraction in Music Representation
15.5 Reference in Music Representation
15.6 Synthesis
16 Folk the Algorithms: (Mis)Applying Artificial Intelligence to Folk Music
16.1 Introduction
16.2 Music Artificial Intelligence and Its Application to Folk Music
16.2.1 1950s–60s
16.2.2 1970s–90s
16.2.3 2000s–10s
16.3 Modeling Folk Music Transcriptions with Long Short-Term Memory Networks
16.3.1 Long Short-Term Memory Networks
16.3.2 folk-rnn (v2)
16.3.3 folk-rnn (v3)
16.3.4 folk-rnn (vBeamSearch)
16.3.5 folk-rnn (vScandinavian)
16.4 Evaluation
16.4.1 Evaluation by Parameter Analysis
16.4.2 Evaluation by Co-creation
16.4.3 Evaluation by Cherry Picking: "Let's Have Another Gan Ainm"
16.5 Ethical Considerations
16.6 Conclusion
17 Automatic Music Composition with Evolutionary Algorithms: Digging into the Roots of Biological Creativity
17.1 Introduction
17.2 Lindenmayer Systems
17.3 Evolutionary Algorithms
17.3.1 Optimization Problems
17.3.2 Evolutionary Algorithms
17.3.3 Indirect Encoding
17.3.4 Evolving L-Systems
17.4 Melomics
17.4.1 Atonal Music
17.4.2 Examples of Atonal Music
17.4.3 Tonal Music
17.4.4 Example of Tonal Music: 0Music and the Web Repository
17.4.5 Example of Application: Music Therapy
17.4.6 Output Formats and Interoperability
17.5 A Soundtrack for Life
17.5.1 Is Artificial Music Actually Music?
17.5.2 Creation or Discovery?
17.5.3 Why Artificial Music?
17.6 Conclusions
18 Assisted Music Creation with Flow Machines: Towards New Categories of New
18.1 Background and Motivations
18.1.1 The Continuator
18.2 Markov Constraints: Main Scientific Results
18.2.1 The "Markov + X" Roadmap
18.2.2 Positional Constraints
18.2.3 Meter and All that Jazz
18.2.4 Sampling Methods
18.3 Beyond Markov Models
18.4 Flow Composer: The First AI-Assisted Lead Sheet Composition Tool
18.5 Significant Music Productions
18.6 Unfinished but Promising Projects
18.7 Impact and Followup
18.8 Lessons Learned
18.8.1 Better Model Does Not Imply Better Music
18.8.2 New Creative Acts
18.8.3 The Appropriation Effect
18.9 Towards New Categories of New
18.10 Conclusion
19 Performance Creativity in Computer Systems for Expressive Performance of Music
19.1 Introduction
19.1.1 Human Expressive Performance
19.1.2 Computer Expressive Performance
19.1.3 Performance Creativity
19.2 A Generic Framework for Previous Research in Computer Expressive Performance
19.2.1 Modules of Systems Reviewed
19.3 A Survey of Computer Systems for Expressive Music Performance
19.3.1 Non-Learning Systems
19.3.1.1 Director Musices
19.3.1.2 Hierarchical Parabola Model
19.3.1.3 Composer Pulse and Predictive Amplitude Shaping
19.3.1.4 Bach Fugue System
19.3.1.5 Trumpet Synthesis
19.3.1.6 Rubato
19.3.1.7 Pop-E
19.3.1.8 Hermode Tuning
19.3.1.9 Sibelius
19.3.1.10 Computational Music Emotion Rule System
19.3.2 Linear Regression
19.3.2.1 Music Interpretation System
19.3.2.2 CaRo
19.3.3 Artificial Neural Networks
19.3.3.1 Artificial Neural Network Piano System
19.3.3.2 Emotional Flute
19.3.3.3 User-Curated Piano
19.3.4 Case and Instance-Based Systems
19.3.4.1 SaxEx
19.3.4.2 Kagurame
19.3.4.3 Ha-Hi-Hun
19.3.4.4 PLCG System
19.3.4.5 Combined Phrase-Decomposition/PLCG
19.3.4.6 DISTALL System
19.3.5 Statistical Graphical Models
19.3.5.1 Music Plus One
19.3.5.2 ESP Piano System
19.3.6 Other Regression Methods
19.3.6.1 Drumming System
19.3.6.2 KCCA Piano System
19.3.7 Evolutionary Computation
19.3.7.1 Genetic Programming Jazz Sax
19.3.7.2 Sequential Covering Algorithm GAs
19.3.7.3 Jazz Guitar
19.3.7.4 Ossia
19.3.7.5 MASC
19.4 A Detailed Example: IMAP
19.4.1 Evolutionary Computation
19.4.2 IMAP Overview
19.4.2.1 Agent Evaluation Functions
19.4.2.2 Evaluation Equations
19.4.2.3 Agent Function Definitions
19.4.2.4 Agent Cycle
19.4.3 User-Generated Performances of IMAP
19.4.4 Experiments and Evaluation
19.4.4.1 Experiment 1: Can Agents Generate Performances Expressing Their “preference” Weights?
19.4.4.2 Experiment 2: Can One Control the Extent of the Performances’ Diversity?
19.4.4.3 Experiment 3: Controlling the Direction of the Performances’ Diversity
19.4.5 IMAP Summary
19.5 Concluding Remarks
References
20 Imitative Computer-Aided Musical Orchestration with Biologically Inspired Algorithms
20.1 Introduction
20.1.1 Musical Orchestration
20.1.2 Musical Timbre
20.1.3 Musical Orchestration with the Aid of the Computer
20.2 State of the Art
20.2.1 Early Approaches
20.2.2 Generative Approaches
20.2.3 Machine Learning
20.3 Imitative Computer-Aided Musical Orchestration
20.3.1 Overview
20.3.2 Representation
20.3.3 Audio Descriptor Extraction
20.3.4 Pre-processing
20.3.5 Combination Functions
20.3.6 Distance Functions
20.3.7 Calculating the Fitness of Orchestrations
20.4 Computer-Aided Musical Orchestration with Bio-inspired Algorithms
20.4.1 Searching for Orchestrations for a Reference Sound
20.4.2 Finding Orchestrations for a Reference Sound
20.5 Discussion
20.5.1 Perceptual Considerations
20.5.2 Diversity of Orchestrations in CAMO-AIS
20.5.3 Dynamic Orchestrations with Orchids
20.5.4 Dynamic Orchestrations with Orchidea
20.6 Conclusions
21 Human-Centred Artificial Intelligence in Concatenative Sound Synthesis
21.1 Introduction
21.2 Sound Synthesis: A Brief Overview
21.3 How Can Concatenative Sound Synthesis Synthesize Sounds?
21.4 What Affects CSS Result?
21.5 At All Costs
21.6 Human-Centred Artificial Intelligence: That’s not What I Ordered
21.7 Is Similar, Interesting?
21.8 Where Are We Now?
References
22 Deep Generative Models for Musical Audio Synthesis
22.1 Introduction
22.1.1 Overview
22.1.2 Generative Neural Networks
22.1.3 The Gift of Music: DNN-based Synthesizers
22.1.4 Only a Matter of Time: Real-Time Generation
22.1.5 The Answer Lies Within: Interfacing via Conditional Models
22.1.6 Along for the Ride: External Conditioning
22.1.7 Beneath the Surface: Latent Variable Models of Music
22.1.8 Build Me Up, Break Me Down: Audio Synthesis with GANs
22.1.9 A Change of Seasons: Music Translation
22.1.10 Discussion and Conclusion
23 Transfer Learning for Generalized Audio Signal Processing
23.1 Introduction
23.2 Feature Space Adaptation
23.2.1 Echo State Network
23.3 Use Cases
23.3.1 Affective Computing
23.3.2 Bird Species Identification
23.4 Conclusions and Future Directions
24 From Audio to Music Notation
24.1 Introduction
24.2 Problem Definition
24.3 Datasets and Evaluation Metrics
24.3.1 Datasets
24.3.2 Evaluation Metrics
24.4 State of the Art
24.4.1 Overview
24.4.2 Neural Networks for AMT
24.4.3 Multi-task Learning Methods
24.4.4 Music Language Models
24.4.5 Complete Transcription
24.5 Challenges
24.5.1 Datasets
24.5.2 Evaluation Metrics
24.5.3 Non-Western Music
24.5.4 Complete Transcription
24.5.5 Expressive Performance
24.5.6 Domain Adaptation
24.6 Conclusions
25 Automatic Transcription of Polyphonic Vocal Music
25.1 Introduction
25.2 Related Works
25.3 Polyphonic Vocal Music
25.3.1 Particular Characteristics of Vocal Sounds
25.3.2 Probabilistic Latent Component Analysis
25.4 PLCA Applied to Polyphonic Vocal Music
25.4.1 Dictionary Construction
25.4.2 Model 1: MSINGERS
25.4.3 Model 2: VOCAL4
25.4.4 Voice Assignment
25.5 Final Considerations
26 Graph-Based Representation, Analysis, and Interpretation of Popular Music Lyrics Using Semantic Embedding Features
26.1 Introduction
26.2 Key Concepts and Related Works
26.2.1 Deep Modeling of Lexical Meanings
26.2.2 Mapping Artificial Intelligence Research for Lyrics Studies
26.2.3 Believe in Data: Data-Driven Approaches for Lyrics Studies
26.2.4 Critical Re-definition from Empirical to Experimental
26.3 Semantic Word Embedding
26.4 Appending Relational Links
26.4.1 Appending Similarity Links
26.4.2 Appending Lyric Structural Links
26.4.3 Adjacency to the Key Analysis Concepts
26.5 Details of Feature Descriptors
26.5.1 Spatial Distribution Based Features
26.5.1.1 Centroid Location
26.5.1.2 Span Volume and Dispersion Between Semantic Word Embedding Dimensions
26.5.1.3 Maximum Semantic Span
26.5.1.4 Semantic Span Distribution
26.5.1.5 Semantic Span Imbalance Among Semantic Embedding Dimensions
26.5.1.6 Token Distributional Symmetry Based Descriptors
26.5.2 Temporal Structure-Based Features
26.5.2.1 Average Step Size
26.5.2.2 Variation Pattern of the Step Size
26.5.2.3 Average Adjacent Edge Angles
26.5.2.4 Variations of Adjacent Edge Angles
26.5.2.5 Mean of Adjacent Edge Angle Increment
26.5.2.6 Skip Length Descriptors
26.5.2.7 Symmetric Pattern of the Lyric Chain
26.5.3 Feature Descriptor for Graph Topology
26.5.3.1 Average Node Connectedness
26.5.3.2 Variation of Node Connectedness
26.5.3.3 Topological Balance of Node Connectedness
26.5.3.4 Page Rank Descriptor
26.5.3.5 Distribution of Edge Angles
26.5.3.6 Graph Symmetry Descriptors
26.5.3.7 Connection Topological Symmetry Based on Betweenness Centrality
26.5.4 Feature Descriptors on Graph Spectra and Other Analytical Graph Representations
26.5.4.1 Matrix Decomposition Based Descriptors
26.5.4.2 Root Mean Square of Spectra Span Volume
26.6 Empirical Studies
26.6.1 Studies on Distributional Patterns Over Genre Categories
26.6.2 Studies on Distributional Patterns from Different Time Period
26.7 Conclusions and Future Work
References
27 Interactive Machine Learning of Musical Gesture
27.1 Introduction
27.1.1 Why Machine Learning Musical Gestures? Needs and Challenges
27.1.2 Chapter Overview
27.2 Machine-Sensing Gesture
27.2.1 Sensing Movement
27.2.2 Sensing the Body
27.3 Analysing Gesture
27.3.1 Motion Features
27.3.2 EMG Features
27.4 Machine Learning Techniques
27.4.1 Classification
27.4.2 Regression
27.4.3 Temporal Modelling
27.5 Sound Synthesis and Gesture Mapping
27.5.1 Granular Synthesis and Sound Tracing
27.5.2 Corpus-Based Synthesis and Feature Mapping
27.6 Reinforcement Learning
27.6.1 RL for Exploring Gesture-Sound Mappings: Assisted Interactive Machine Learning
27.6.2 AIML System Architecture
27.6.3 AIML Workflow
27.7 In Practice: IML Techniques in Musical Pieces
27.7.1 Wais (Tanaka)
27.7.2 11 Degrees of Dependence (Visi)
27.7.3 Delearning (Tanaka)
27.7.4 "You Have a New Memory" (Visi)
27.8 Conclusion
28 Human–Robot Musical Interaction
28.1 Introduction
28.2 Music, Interaction, and Robots
28.3 The Waseda Wind Robot Players
28.3.1 The Waseda Flutist WF
28.3.2 The Waseda Anthropomorphic Saxophonist WAS
28.4 Technical Musical Interaction
28.4.1 Asynchronous Verbal Technical Interaction
28.4.2 Synchronous Automatic Interaction
28.4.3 Interaction via Direct Signaling
28.4.4 Multimodal Dynamic Interaction
28.4.5 Technical Interaction in an Orchestra: Conducting Gestures
28.5 Creative Interaction
28.6 Emotional Interaction
28.7 Concluding Discussion
References
29 Shimon Sings: Robotic Musicianship Finds Its Voice
29.1 Introduction—Robotic Musicianship at GTCMT
29.1.1 Platforms
29.1.2 Design Principles
29.2 "Shimon Sings": Motivation and Approach
29.3 Lyrics Generation
29.3.1 Implementation
29.3.2 Experiment
29.3.3 Results
29.4 Gesture Generation
29.4.1 Implementation
29.4.2 Experiment Methodology
29.4.3 Results
29.5 Discussion and Future Work
30 AI-Lectronica: Music AI in Clubs and Studio Production
30.1 The Artificial Intelligence Sonic Boom
30.2 Music Production Tools and AI
30.3 AIlgorAIve
30.4 A PersonAl PerspectAve: Shelly Knotts
30.4.1 CYOF
30.4.2 AlgoRIOTmic Grrrl!
30.4.3 Future Work
30.5 I PersonIl PerspectIve: Nick Collins
30.6 Conclusions
References
31 Musicking with Algorithms: Thoughts on Artificial Intelligence, Creativity, and Agency
31.1 Introduction
31.1.1 AI and Art
31.1.2 Motivation
31.1.3 Properties of an Artist
31.1.4 Possibilities with AI in Art and Music
31.1.5 Art in AI
31.2 Agency
31.2.1 Influential Agency
31.2.2 Influential Agency of an Algorithm
31.2.3 Influence as Information
31.2.4 Influential Agency in a Typical AI Music Implementation
31.2.5 Influential Agency in an Actual Example: Ossia
31.2.6 Agency is Where in the Code?
31.3 Tools and Humans
31.3.1 Effort Versus Tool Complexity
31.3.2 Non-mediated Agency in Algorithms
31.4 Spectra of Agency
31.4.1 Spectrum of Tool Complexity
31.4.2 Spectrum of Agency
31.4.3 Spectrum of Generativity
31.5 Problems with Creative AI
31.5.1 The Inherent Non-creativity of Statistical Machine Learning
31.5.2 Opaqueness of AI-Generated Material
31.5.3 The Lack of a Model of the Outside World
31.6 Aesthetics
31.6.1 Autonomous Aesthetics and Agency
31.6.2 Characteristic Inability
31.6.3 Apparent Agency Attribution
31.6.4 Uncanny Valley
31.6.5 Authenticity
31.6.6 Human Measure
31.6.7 Cross-Species Art
31.6.8 The Role of Time—Learning as a Non-Real-time Process
31.6.9 Culture and Forgetting
31.7 Conclusions
31.7.1 Will AI Make Art-Making Easier?
31.7.2 The Road Ahead—Musicking with Algorithms
References
32 cellF: Surrogate Musicianship as a Manifestation of In-Vitro Intelligence
32.1 Introduction
32.2 Origins and Development of the Work
32.3 Influences from the History of Modern Music
32.4 Influences from the Field of Robotic Musicianship
32.5 In-Vitro Intelligence
32.6 Surrogate Musicianship
32.7 Concluding Discussion
References
33 On Growing Computers from Living Biological Cells
33.1 Introduction
33.2 Meet Physarum Polycephalum
33.3 Physarum Polycephalum Sonification
33.4 Developing the Biomemristor
33.4.1 Music Processing with Biomemristors
33.5 Performing Boolean Logic and Arithmetic Operations with the Biomemristor
33.5.1 Bio-Logic Operations
33.5.1.1 The OR Operator
33.5.1.2 The AND Operator
33.5.1.3 The NOT Operator
33.5.2 Towards Bio-Logic Electronic Circuits: Half ADDER
33.6 Concluding Remarks
Acknowledgements
References
34 Quantum Computer: Hello, Music!
34.1 Introduction
34.2 Historical Background
34.3 Algorithmic Computer Music
34.4 Quantum Computing Primer
34.5 Quantum Vocal Synthesizer
34.6 Quantum Walk Sequencer
34.7 Concluding Remarks
Acknowledgements
References

How to Receive the Book

This book is the original-language edition, not a Persian translation. After completing the purchase process, you can download the book. If you need the book in a different format, contact support.