
iOS7 Day by Day

a review of iOS7 for developers, in 24 bite-sized chunks


Sam Davies
This book is for sale at http://leanpub.com/ios7daybyday
This version was published on 2013-11-05

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process. Lean
Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get
reader feedback, pivot until you have the right book and build traction once you do.
© 2013 Scott Logic

Tweet This Book!


Please help Sam Davies by spreading the word about this book on Twitter!
The suggested hashtag for this book is #iOS7DayByDay.
Find out what other people are saying about the book by clicking on this link to search for this hashtag on
Twitter:
https://twitter.com/search?q=#iOS7DayByDay

Contents

Preface
    Audience
    Book layout
    Source code

Day 0: UIKit Dynamics
    The physical universe
    Building a pendulum
    Conclusion

Day 1: NSURLSession
    Simple download
    Tracking progress
    Canceling a download
    Resumable download
    Background download
    Summary

Day 2: Asset Catalog
    Introduction
    Custom imagesets
    Conclusion

Day 3: Background Fetch
    Introduction
    Enabling background fetch
    Implementation
    Testing
    Conclusion

Day 4: Speech Synthesis with AVSpeechSynthesizer
    Introduction
    Voices
    Utterances
    Implementation
    Conclusion

Day 5: UIDynamics and Collection Views
    Building a Carousel
    Adding springs
    Inserting items
    Conclusion

Day 6: TintColor
    Tint color of existing iOS controls
    Tint Dimming
    Using tint color in custom views
    Tinting images with tintColor
    Conclusion

Day 7: Taking Snapshots of UIViews
    Introduction
    Snapshotting for Animation
    Pre/post View Updates
    Snapshotting to an image
    Limitations
    Conclusion

Day 8: Reading list with SafariServices
    Introduction
    Usage
    Sample project
    Conclusion

Day 9: Device Identification
    Introduction
    Vendor Identification
    Advertising Identification
    Network Identification
    Conclusion

Day 10: Custom UIViewController Transitions
    Navigation Controller Delegate
    Creating a custom transition
    Summary

Day 11: UIView Key-Frame Animations
    Introduction
    Rainbow Changer
    Keyframe animation options
    Rotation Directions
    Conclusion

Day 12: Dynamic Type
    Introduction
    Dynamic Type
    Font Descriptors
    Conclusion

Day 13: Route Directions with MapKit
    Introduction
    Requesting Directions
    Directions Response
    Rendering a Polyline
    Route steps
    Building RouteMaster
    Conclusion

Day 14: Interactive View Controller Transitions
    Introduction
    Flip Transition Animation
    Interactive transitioning
    Conclusion

Day 15: CoreImage Filters
    Introduction
    Photo Effect Filters
    QR Code Generation
    Conclusion

Day 16: Decoding QR Codes with AVFoundation
    Introduction
    AVFoundation pipeline
    Capturing metadata
    Drawing the code outline
    Conclusion

Day 17: iBeacons
    Introduction
    Create a beacon
    Beacon Ranging
    Conclusion

Day 18: Detecting Face Features with CoreImage
    Introduction
    Face detection with AVFoundation
    Feature finding with CoreImage
    Conclusion

Day 19: UITableView Row Height Estimation
    Introduction
    Without estimation
    With estimation
    Conclusion

Day 20: View controller content and navigation bars
    Introduction
    iOS7 View Controller Changes: The theory
    In Practice
    Conclusion

Day 21: Multi-column TextKit text rendering
    Introduction
    TextKit
    Multiple Columns
    Conclusion

Day 22: Downloadable Fonts
    Introduction
    Listing available fonts
    Downloading a font
    Conclusion

Day 23: Multipeer Connectivity
    Introduction
    Browsing for devices
    Advertising availability
    Sending Data
    Conclusion

Afterword
    Useful Links

Preface
Welcome along to iOS7 Day-by-Day! In September of 2013 Apple released the 7th version of their exceedingly popular mobile operating system into the world. With it came a new user interface appearance, new icons and lots of other little changes for users to complain about. However, the most exciting changes were, as ever, in the underlying APIs - with new frameworks and considerable new functionality added to existing frameworks.
There are so many changes, in fact, that it's very difficult for a busy developer to pore through the release notes to discover the features which they can take advantage of. Therefore I wrote and published a daily blog series, in which each article discussed a new feature and created a sample app to demonstrate it.
This series was very successful, and ran for a total of 24 days - covering many parts of the new operating system, including both the big headline frameworks and also the somewhat smaller hidden gems. The only notable omissions are the game-related frameworks, such as SpriteKit and changes to GameCenter. This is, unapologetically, because I have little experience of games, and also felt that these were being covered extensively elsewhere.
This book represents the sum-total of the blog series - each chapter represents a different post in the day-by-day series, with only minor changes. The original posts are still available online, and may offer some additional information in the form of comments.
If you have any comments or corrections for the book then do let me know - I'm @iwantmyrealname on Twitter.

Audience
Each chapter in this book is about a feature which was introduced in iOS7, and is therefore primarily targeted at developers who have had some experience of building iOS apps. Having said that, non-developers familiar with iOS might be interested in reading about the new features available.
If you are new to iOS development it's probably worth reading through some of the introductory material available elsewhere - e.g. the excellent tutorials available on raywenderlich.com.

Book layout
This book is a collection of daily blog posts which, for the most part, stand alone. There are one or two which cross-reference each other, but they can be read entirely independently.
The chapters aren't meant to be complete tutorials, and as such, the code snippets within each chapter usually just highlight the more salient bits of code associated with a particular step. However, each chapter has an accompanying working app, the source code for which can be found on GitHub.
http://www.shinobicontrols.com/blog/posts/2013/09/19/introducing-ios7-day-by-day/
https://twitter.com/iwantmyrealname
http://www.raywenderlich.com


Source code
The GitHub repository at github.com/ShinobiControls/ios7-day-by-day contains projects which accompany
each chapter, organized by day number.
The projects are all built using Xcode 5, and should run straight after downloading. Any pull-requests for
fixes and improvements will be greatly appreciated!
https://github.com/ShinobiControls/ios7-day-by-day

Day 0: UIKit Dynamics

With the introduction of iOS7 Apple made it very clear that they are pushing the interaction between devices and the real world. One of the new APIs they introduced was UIKit Dynamics - a 2-dimensional physics engine which lies underneath the entirety of UIKit. In day 0 of this blog series we're going to take a look at UIKit Dynamics and build a Newton's cradle simulation.

The physical universe

In order to model the physics of the real world we use UIDynamicBehavior subclasses to apply different behaviors to objects which adopt the UIDynamicItem protocol. Examples of behaviors include concepts such as gravity, collisions and springs. Although you can create your own objects which adopt the UIDynamicItem protocol, importantly UIView already does this. The UIDynamicBehavior objects can be composited together to generate a behavior object which contains all the behavior for a given object or set of objects.
Once we have specified the behaviors for our dynamic objects we can provide them to a UIDynamicAnimator instance - the physics engine itself. This runs the calculations to determine how the different objects should interact given their behaviors. The following shows a conceptual overview of the UIKit Dynamics world:

UIKit Dynamics Conceptual Overview

Building a pendulum
Remembering back to high school science - one of the simplest objects studied in Newtonian physics is a pendulum. Let's create a UIView to represent the ball-bearing:

UIView *ballBearing = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 40, 40)];
ballBearing.backgroundColor = [UIColor lightGrayColor];
ballBearing.layer.cornerRadius = 10;
ballBearing.layer.borderColor = [UIColor grayColor].CGColor;
ballBearing.layer.borderWidth = 2;
ballBearing.center = CGPointMake(200, 300);
[self.view addSubview:ballBearing];

Now we can add some behaviors to this ball bearing. We'll create a composite behavior to collect the behavior together:

UIDynamicBehavior *behavior = [[UIDynamicBehavior alloc] init];

Next we'll start adding the behaviors we wish to model - first up gravity:

UIGravityBehavior *gravity = [[UIGravityBehavior alloc] initWithItems:@[ballBearing]];
gravity.magnitude = 10;
[behavior addChildBehavior:gravity];

UIGravityBehavior represents the gravitational attraction between an object and the Earth. It has properties which allow you to configure the vector of the gravitational force (i.e. both magnitude and direction). Here we are increasing the magnitude of the force, but keeping it directed in an increasing y direction.
The other behavior we need to apply to our ball bearing is an attachment behavior - which represents the string from which it hangs:

CGPoint anchor = ballBearing.center;
anchor.y -= 200;
UIAttachmentBehavior *attachment = [[UIAttachmentBehavior alloc]
                       initWithItem:ballBearing attachedToAnchor:anchor];
[behavior addChildBehavior:attachment];

UIAttachmentBehavior instances attach dynamic objects either to an anchor point or to another object. They have properties which control the behavior of the attaching string - specifying its frequency, damping and length. The default values for these ensure a completely rigid attachment, which is what we want for a pendulum.
Now that the behaviors are specified on the ball bearing, we can create the physics engine to look after it all, which is defined as an ivar UIDynamicAnimator *_animator;:

_animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];
[_animator addBehavior:behavior];

UIDynamicAnimator represents the physics engine which is required to model the dynamic system. Here we create it and specify which view it should use as its reference view (i.e. specifying the spatial universe) and add the composite behavior we've built.
With that we've actually created our first UIKit Dynamics system. However, if you run up the app, nothing will happen. This is because the system starts in an equilibrium state - we need to perturb the system to see some motion.

Gesture responsive behaviors

We need to add a gesture recognizer to the ball bearing to allow the user to play with the pendulum:

UIPanGestureRecognizer *gesture = [[UIPanGestureRecognizer alloc] initWithTarget:self
                                     action:@selector(handleBallBearingPan:)];
[ballBearing addGestureRecognizer:gesture];

In the target for the gesture recognizer we apply a constant force behavior to the ball bearing:

- (void)handleBallBearingPan:(UIPanGestureRecognizer *)recognizer
{
    // If we're starting the gesture then create a drag force
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        if(_userDragBehavior) {
            [_animator removeBehavior:_userDragBehavior];
        }
        _userDragBehavior = [[UIPushBehavior alloc] initWithItems:@[recognizer.view]
                                                             mode:UIPushBehaviorModeContinuous];
        [_animator addBehavior:_userDragBehavior];
    }

    // Set the force to be proportional to distance the gesture has moved
    _userDragBehavior.pushDirection =
        CGVectorMake([recognizer translationInView:self.view].x / 10.f, 0);

    // If we're finishing then cancel the behavior to 'let-go' of the ball
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        [_animator removeBehavior:_userDragBehavior];
        _userDragBehavior = nil;
    }
}

UIPushBehavior represents a simple linear force applied to objects. We use the callback to apply a force to the ball bearing, which displaces it. We have an ivar UIPushBehavior *_userDragBehavior which we create when a gesture starts, remembering to add it to the dynamics animator. We set the size of the force to be proportional to the horizontal displacement. In order for the pendulum to swing we remove the push behavior when the gesture has ended.

Combining multiple pendulums

A Newton's cradle is an arrangement of identical pendulums, such that the ball bearings are almost touching.

Newton's Cradle

To recreate this using UIKit Dynamics we need to create multiple pendulums - following the same pattern for each of them as we did above. They should be spaced so that they aren't quite touching (see the sample code for details).
We also need to add a new behavior which will describe how the ball bearings collide with each other. We now have an ivar to store the ball bearings, NSArray *_ballBearings;:
UICollisionBehavior *collision = [[UICollisionBehavior alloc]
                                  initWithItems:_ballBearings];
[behavior addChildBehavior:collision];

Here we're using a collision behavior and a set of objects which are modeled in the system. Collision behaviors can also be used to model objects hitting boundaries such as view boundaries, or arbitrary bezier path boundaries.
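
For instance - an illustrative sketch rather than part of the cradle project - boundaries can be attached to the same collision behavior:

// Treat the reference view's bounds as collision walls
collision.translatesReferenceBoundsIntoBoundary = YES;

// Or add an arbitrary bezier path boundary (the identifier and rect are arbitrary)
[collision addBoundaryWithIdentifier:@"floor"
                             forPath:[UIBezierPath bezierPathWithRect:
                                         CGRectMake(0, 500, 320, 10)]];
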
If you run the app now and try to move one of the pendulums you'll notice that the cradle doesn't behave as you would expect it to. This is because the collisions are currently not elastic. We need to add a special type of dynamic behavior to specify various shared properties:
UIDynamicItemBehavior *itemBehavior = [[UIDynamicItemBehavior alloc]
                                       initWithItems:_ballBearings];
// Elasticity governs the efficiency of the collisions
itemBehavior.elasticity = 1.0;
itemBehavior.allowsRotation = NO;
itemBehavior.resistance = 2.0;
[behavior addChildBehavior:itemBehavior];

We use UIDynamicItemBehavior to specify the elasticity of the collisions, along with some other properties such as resistance (pretty much air resistance) and rotation. If we allow rotation we can specify the angular resistance. The dynamic item behavior also allows setting of linear and angular velocity, which can be useful when matching velocities with gestures.
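As a hedged example of that last point - not from the sample project - an item's velocity could be matched to a pan gesture when the user lets go:

// Hand the gesture's final velocity over to the physics engine
CGPoint velocity = [recognizer velocityInView:self.view];
[itemBehavior addLinearVelocity:velocity forItem:ballBearing];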

Running the app up now will show a Newton's cradle which behaves exactly as you would expect in the real world. Maybe as an extension you could investigate drawing the strings of the pendulums as well as the ball bearings.

Completed UIDynamics Newton's Cradle

The code which accompanies this post represents the completed Newton's cradle project. It uses all the elements introduced, but just tidies them up a little into a demo app.

Conclusion
This introduction to UIKit Dynamics has barely scratched the surface - with these building blocks really
complex physical systems can be modeled. This opens the door for creating apps which are heavily influenced
by our inherent understanding of motion and object interactions from the real world.

Day 1: NSURLSession
In the past, networking for iOS was performed using NSURLConnection, which used global state to manage cookies and authentication. Therefore it was possible to have 2 different connections competing with each other for shared settings. NSURLSession sets out to solve this problem and a host of others as well.
The project which accompanies this guide includes the three different download scenarios discussed forthwith. This post won't describe the entire project - just the salient parts associated with the new NSURLSession API.

Simple download

NSURLSession represents the entire state associated with multiple connections, which was formerly a shared global state. Session objects are created with a factory method which takes a configuration object. There are 3 types of possible sessions:
1. Default, in-process session
2. Ephemeral (in-memory), in-process session
3. Background session
For a simple download we'll just use a default session:

NSURLSessionConfiguration *sessionConfig =
    [NSURLSessionConfiguration defaultSessionConfiguration];

Once a configuration object has been created there are properties on it which control the way it behaves. For example, it's possible to set acceptable levels of TLS security, whether cookies are allowed, and timeouts. Two of the more interesting properties are allowsCellularAccess and discretionary. The former specifies whether a device is permitted to run the networking session when only a cellular radio is available. Setting a session as discretionary enables the operating system to schedule the network access at sensible times - i.e. when a WiFi network is available, and when the device has good power. This is primarily of use for background sessions, and as such defaults to true for a background session.
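
For illustration, a configuration might be tuned like this (a sketch - the values are arbitrary, but the property names are part of NSURLSessionConfiguration):

sessionConfig.allowsCellularAccess = NO;              // WiFi-only transfers
sessionConfig.timeoutIntervalForRequest = 30.0;       // per-request timeout (seconds)
sessionConfig.HTTPCookieAcceptPolicy = NSHTTPCookieAcceptPolicyNever;
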
Once we have a session configuration object we can create the session itself:
NSURLSession *inProcessSession;
inProcessSession = [NSURLSession sessionWithConfiguration:sessionConfig
                                                 delegate:self
                                            delegateQueue:nil];


Note here that we're also setting ourselves as a delegate. Delegate methods are used to notify us of the progress of data transfers and to request information when challenged for authentication. We'll implement some appropriate methods soon.
Data transfers are encapsulated in tasks - of which there are three types:
1. Data task (NSURLSessionDataTask)
2. Upload task (NSURLSessionUploadTask)
3. Download task (NSURLSessionDownloadTask)
In order to perform a transfer within the session we need to create a task. For a simple file download:

NSString *url = @"http://appropriate/url/here";
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];

NSURLSessionDownloadTask *cancellableTask =
    [inProcessSession downloadTaskWithRequest:request];
[cancellableTask resume];

That's all there is to it - the session will now asynchronously attempt to download the file at the specified URL.
In order to get hold of the requested file download we need to implement a delegate method:
- (void)URLSession:(NSURLSession *)session
      downloadTask:(NSURLSessionDownloadTask *)downloadTask
didFinishDownloadingToURL:(NSURL *)location
{
    // We've successfully finished the download. Let's save the file
    NSFileManager *fileManager = [NSFileManager defaultManager];

    NSArray *URLs = [fileManager URLsForDirectory:NSDocumentDirectory
                                        inDomains:NSUserDomainMask];
    NSURL *documentsDirectory = URLs[0];

    NSURL *destinationPath = [documentsDirectory URLByAppendingPathComponent:
                                                     [location lastPathComponent]];
    NSError *error;

    // Make sure we overwrite anything that's already there
    [fileManager removeItemAtURL:destinationPath error:NULL];
    BOOL success = [fileManager copyItemAtURL:location
                                        toURL:destinationPath
                                        error:&error];

    if (success)
    {
        dispatch_async(dispatch_get_main_queue(), ^{
            UIImage *image = [UIImage imageWithContentsOfFile:[destinationPath path]];
            self.imageView.image = image;
            self.imageView.contentMode = UIViewContentModeScaleAspectFill;
            self.imageView.hidden = NO;
        });
    }
    else
    {
        NSLog(@"Couldn't copy the downloaded file");
    }

    if(downloadTask == cancellableTask) {
        cancellableTask = nil;
    }
}

This method is defined on NSURLSessionDownloadTaskDelegate. We get passed the temporary location of the downloaded file, so in this code we're saving it off to the documents directory and then (since we have a picture) displaying it to the user.
The above delegate method only gets called if the download task succeeds. The following method is on NSURLSessionTaskDelegate and gets called after every task finishes, irrespective of whether it completes successfully:
- (void)URLSession:(NSURLSession *)session
              task:(NSURLSessionTask *)task
didCompleteWithError:(NSError *)error
{
    dispatch_async(dispatch_get_main_queue(), ^{
        self.progressIndicator.hidden = YES;
    });
}

If the error object is nil then the task completed without a problem. Otherwise it's possible to query it to find out what the problem was. If a partial download has been completed then the error object contains a reference to an NSData object which can be used to resume the transfer at a later stage.
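
As a sketch of what that looks like in practice (the key is NSURLSessionDownloadTaskResumeData; the surrounding code is illustrative):

if (error) {
    // Present when the server supports resuming and some bytes arrived
    NSData *resumeData = error.userInfo[NSURLSessionDownloadTaskResumeData];
    if (resumeData) {
        // Stash this away - it can seed a new download task later
    }
}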

Tracking progress
You'll have noticed that we hid a progress indicator as part of the task completion method at the end of the last section. Updating the progress of this progress bar couldn't be easier. There is an additional delegate method which is called zero or more times during the task's lifetime:

- (void)URLSession:(NSURLSession *)session
      downloadTask:(NSURLSessionDownloadTask *)downloadTask
      didWriteData:(int64_t)bytesWritten
 totalBytesWritten:(int64_t)totalBytesWritten
totalBytesExpectedToWrite:(int64_t)totalBytesExpectedToWrite
{
    double currentProgress = totalBytesWritten / (double)totalBytesExpectedToWrite;
    dispatch_async(dispatch_get_main_queue(), ^{
        self.progressIndicator.hidden = NO;
        self.progressIndicator.progress = currentProgress;
    });
}

This is another method which is part of the NSURLSessionDownloadTaskDelegate, and we use it here to estimate the progress and update the progress indicator.

Canceling a download
Once an NSURLConnection had been sent off it was impossible to cancel it. This is different, with an easy ability to cancel an NSURLSessionTask:
- (IBAction)cancelCancellable:(id)sender {
    if(cancellableTask) {
        [cancellableTask cancel];
        cancellableTask = nil;
    }
}

It's as easy as that! It's worth noting that the URLSession:task:didCompleteWithError: delegate method will be called once a task has been canceled, to enable you to update the UI appropriately. It's quite possible that after canceling a task the URLSession:downloadTask:didWriteData:totalBytesWritten:totalBytesExpectedToWrite: method might be called again; however, the didComplete method will definitely be last.

Resumable download
It's also possible to resume a download pretty easily. There is an alternative cancel method which provides an NSData object which can be used to create a new task to continue the transfer at a later stage. If the server supports resuming downloads then the data object will include the bytes already downloaded:

- (IBAction)cancelCancellable:(id)sender {
    if(self.resumableTask) {
        [self.resumableTask cancelByProducingResumeData:^(NSData *resumeData) {
            partialDownload = resumeData;
            self.resumableTask = nil;
        }];
    }
}

Here we've popped the resume data into an ivar which we can later use to resume the download.
When creating the download task, rather than supplying a request you can provide a resume data object:
if(!self.resumableTask) {
    if(partialDownload) {
        self.resumableTask = [inProcessSession
                              downloadTaskWithResumeData:partialDownload];
    } else {
        NSString *url = @"http://url/for/image";
        NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
        self.resumableTask = [inProcessSession downloadTaskWithRequest:request];
    }
    [self.resumableTask resume];
}

If we've got a partialDownload object then we create the task using that, otherwise we create the task as we did before.
The only other thing to remember here is that we need to set partialDownload = nil; when the process ends.

Background download
The other major feature that NSURLSession introduces is the ability to continue data transfers even when your app isn't running. In order to do this we configure a session to be a background session:
- (NSURLSession *)backgroundSession
{
    static NSURLSession *backgroundSession = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSString *confStr = @"com.shinobicontrols.BackgroundDownload.BackgroundSession";
        NSURLSessionConfiguration *config = [NSURLSessionConfiguration
                                             backgroundSessionConfiguration:confStr];
        backgroundSession = [NSURLSession sessionWithConfiguration:config
                                                          delegate:self
                                                     delegateQueue:nil];
    });
    return backgroundSession;
}

It's important to note that we can only create one session with a given background token, hence the dispatch_once block. The purpose of the token is to allow us to collect the session once our app is restarted. Creating a background session starts up a background transfer daemon which will manage the data transfer for us. This will continue to run even when the app has been suspended or terminated.
Starting a background download task is exactly the same as we did before - all of the background functionality is managed by the NSURLSession we have just created:
NSString *url = @"http://url/for/picture";
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
self.backgroundTask = [self.backgroundSession downloadTaskWithRequest:request];
[self.backgroundTask resume];

Now, even when you press the home button to leave the app, the download will continue in the background
(subject to the configuration options mentioned at the start).
When the download is completed then iOS will restart your app to let it know - and to pass it the payload. To
do this it calls the following method on your app delegate:
- (void)application:(UIApplication *)application
handleEventsForBackgroundURLSession:(NSString *)identifier
  completionHandler:(void (^)())completionHandler
{
    self.backgroundURLSessionCompletionHandler = completionHandler;
}

Here we get passed a completion handler, which we should call once we've accepted the downloaded data and updated our UI appropriately. Here we're saving off the completion handler (remembering that blocks have to be copied), and letting the loading of the view controller manage the data handling. When the view controller is loaded it creates the background session (which sets the delegate) and therefore the same delegate methods we were using before are called.

- (void)URLSession:(NSURLSession *)session
      downloadTask:(NSURLSessionDownloadTask *)downloadTask
didFinishDownloadingToURL:(NSURL *)location
{
    // Save the file off as before, and set it as an image view
    //...

    if (session == self.backgroundSession) {
        self.backgroundTask = nil;
        // Get hold of the app delegate
        SCAppDelegate *appDelegate =
            (SCAppDelegate *)[[UIApplication sharedApplication] delegate];
        if(appDelegate.backgroundURLSessionCompletionHandler) {
            // Need to copy the completion handler
            void (^handler)() = appDelegate.backgroundURLSessionCompletionHandler;
            appDelegate.backgroundURLSessionCompletionHandler = nil;
            handler();
        }
    }
}

There are a few things to note here:
- We can't compare downloadTask to self.backgroundTask. This is because we can't guarantee that self.backgroundTask has been populated, since this could be a new launch of the app. Comparing the session is valid though.
- Here we grab hold of the app delegate. There are other ways of passing the completion handler to the right place.
- Once we've finished saving the file and displaying it we make sure that if we have a completion handler, we remove it, and then invoke it. This tells the operating system that we've finished handling the new download.

Summary
NSURLSession provides a lot of invaluable new features for dealing with networking in iOS (and OSX 10.9) and replaces the old way of doing things. It's worth getting to grips with it and using it for all apps that can be targeted at the new operating systems.

Day 2: Asset Catalog

Introduction
We have all spent time fiddling with organizing image assets in Xcode projects in the past - never sure whether we've got the retina versions of all the images, or whether we've got all the different icon versions we need. In the past this has been a disjoint process at best, but with Xcode 5 and iOS 7 Apple have introduced a new concept in Asset Catalogs, which organize both the physical image files and the contextual information about them. An asset catalog comprises a collection of image sets, app icons and launch screens and is created within Xcode. When creating a new project in Xcode 5 an asset catalog will be created called Images, and will be prepared for holding app icons and launch screens. Xcode provides a facility to migrate old apps to using asset catalogs.
In iOS 7 the catalogs are compiled into an optimized binary format for release to reduce the size of the completed app.
Asset catalogs are a directory on disk which is managed by Xcode. It is structured in a particular way, and includes a JSON file to store the meta-data associated with the catalog:

Asset catalog directory structure

App icons and launch images


The asset catalog auto-created by Xcode is called Images.xcassets and contains entries for AppIcon and
LaunchImage. Each of these has fields appropriate for the deployment target of your project, and includes the
sizes required:


AppIcon selection

Simply dragging images from the Finder into the asset catalog manager in Xcode will bring the image into the asset catalog. If you have provided an incorrectly sized image this will raise a warning in Xcode:

AppIcon incorrect size

Custom imagesets
As well as the standard collections, you can use asset catalogs to manage your own images. Images are
contained within an ImageSet, with a reference for both retina and non-retina versions of the same image.

Custom image set


Creating an image set is done within Xcode, and you can organize image sets within folders for ease of navigation. Using the images stored inside an asset catalog is as simple as using UIImage's imageNamed: method:

UIImage *image = [UIImage imageNamed:@"Australia"];

Slicing images
The other major feature of asset catalogs is the ability to do image slicing. Creating images which are resizable in this manner has been available since iOS 2, but this new feature in Xcode makes it really simple to
do.
Resizing images using slicing is a common technique for creating visual elements such as buttons - where
the center of the image should be stretched or tiled to the new size, and the edges should be stretched in one
direction only and the corners should remain the same size.
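
For comparison, this is what the equivalent has looked like in code (a sketch using the iOS 5 resizableImageWithCapInsets: API; the image name and inset values are illustrative):

UIImage *stretchable = [[UIImage imageNamed:@"ButtonSlice"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(10, 10, 10, 10)];
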
Slicing is available on an ImageSet within the asset catalog - enabled by clicking the Show Slicing button.
You can choose horizontal, vertical or both for scaling direction. Your image will then be overlaid with guides
which mark the fixed endpoints, and size of the re-sizable central section:

Slice ImageSet

Using these sliced images is really easy - simply create a UIImage as before, and then when you resize the
UIImageView used to display it, the image will rescale as per the slicing.


UIImage *btnImage = [UIImage imageNamed:@"ButtonSlice"];

// Let's make 2
UIImageView *iv = [[UIImageView alloc] initWithImage:btnImage];
iv.bounds = CGRectMake(0, 0, 150, CGRectGetHeight(iv.bounds));
iv.center = CGPointMake(CGRectGetWidth(self.view.bounds) / 2, 300);
[self.view addSubview:iv];

// And a stretched version
iv = [[UIImageView alloc] initWithImage:btnImage];
iv.bounds = CGRectMake(0, 0, 300, CGRectGetHeight(iv.bounds));
iv.center = CGPointMake(CGRectGetWidth(self.view.bounds) / 2, 350);
[self.view addSubview:iv];

Sliced result

Conclusion
Asset catalogs aren't a ground-breaking addition to the iOS developer's toolkit, but they really do take some of the pain out of the fiddly aspects of development. They come enabled for new projects with Xcode 5, and will make asset management a much less arduous task.

Day 3: Background Fetch

Introduction
iOS7 introduces a few new multi-tasking APIs - we've already seen the data transfer daemon offered by NSURLSession, which allows file downloads to be continued when the app is in the background. Another new feature is the background fetch API, which allows an app to get updated content even when it isn't running. This enables your app to have up-to-date content the second a user opens it, rather than having to wait for the update to be delivered over the network. iOS intelligently schedules the background fetch events based on your app usage and to save battery life - e.g. it might notice that a user checks their social network every morning when they wake up, and therefore schedule a fetch just before.

Enabling background fetch

An app has to register that it wishes to use background fetch, which with the new capabilities tab in Xcode 5 is really easy to do:

Background fetch capabilities

The other thing that you need to do is specify how often you would like to be woken up to perform a background fetch. If you know that your data is only going to be updated every hour, then that's information that the iOS fetch scheduler can use. If you aren't sure then you can use the recommended value:

- (BOOL)application:(UIApplication *)application
didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Set the fetch interval so that it is actually called
    [[UIApplication sharedApplication]
        setMinimumBackgroundFetchInterval:UIApplicationBackgroundFetchIntervalMinimum];

    return YES;
}

The default value for minimumBackgroundFetchInterval is UIApplicationBackgroundFetchIntervalNever, and therefore this value needs to be set so that your app is called.

Implementation
When a background fetch occurs, iOS starts the app and then makes a call to the application delegate method application:performFetchWithCompletionHandler:. The app then has a certain amount of time to perform the fetch and call the completion handler block it has been provided.
The project which accompanies this article is a traffic status app which simulates receiving notifications about traffic conditions on roads and then displaying them in a UITableView. In this demo, the updates are randomly generated - and this can be seen by pulling the table to refresh, which has the following method as its target:
- (void)refreshStatus:(id)sender
{
    [self createNewStatusUpdatesWithMin:0 max:3 completionBlock:^{
        [refreshControl endRefreshing];
    }];
}

This calls a utility method createNewStatusUpdatesWithMin:max:completionBlock::


- (NSUInteger)createNewStatusUpdatesWithMin:(NSUInteger)min
                                        max:(NSUInteger)max
                            completionBlock:(SCTrafficStatusCreationComplete)compHandler
{
    NSUInteger numberToCreate = arc4random_uniform(max-min) + min;
    NSMutableArray *indexPathsToUpdate = [NSMutableArray array];

    for(int i=0; i<numberToCreate; i++) {
        [self.trafficStatusUpdates insertObject:[SCTrafficStatus randomStatus] atIndex:0];
        [indexPathsToUpdate addObject:[NSIndexPath indexPathForRow:i inSection:0]];
    }

    [self.tableView insertRowsAtIndexPaths:indexPathsToUpdate
                          withRowAnimation:UITableViewRowAnimationFade];
    if(compHandler) {
        compHandler();
    }

    return numberToCreate;
}

In this we create a random number of random updates (using the randomStatus method on SCTrafficStatus,
which, as its name suggests, generates a random status object). We then update our backing data store, refresh
the table and call the completion handler. This is all standard UITableView code, and this is where you can
slot in the code which actually updates your datastore from the network.
In order to add the facility to create updates using background fetch, we add a method to the API of our view
controller:
- (NSUInteger)insertStatusObjectsForFetchWithCompletionHandler:
                  (void (^)(UIBackgroundFetchResult))completionHandler
{
    NSUInteger numberCreated = [self createNewStatusUpdatesWithMin:0
                                                               max:3
                                                   completionBlock:NULL];
    NSLog(@"Background fetch completed - %d new updates", numberCreated);
    UIBackgroundFetchResult result = UIBackgroundFetchResultNoData;
    if(numberCreated > 0) {
        result = UIBackgroundFetchResultNewData;
    }
    completionHandler(result);
    return numberCreated;
}

This method takes a completion handler of the form used by the app delegate background fetch method - so we can use this later on. First we're creating some new updates, using the method we described before. The completion handler needs to be informed whether the update worked, and if it did, whether new data was delivered. We establish this using the return value of our create method, and then call the completion handler with the appropriate result.
This completion handler is used to tell iOS that we're done and that, if appropriate, we're ready to have our snapshot taken to update the display in the app launcher.
Finally, we need to link this up with the app delegate method:


- (void)application:(UIApplication *)application
performFetchWithCompletionHandler:(void (^)(UIBackgroundFetchResult))completionHandler
{
    // Get hold of the view controller
    SCViewController *vc = (SCViewController *)self.window.rootViewController;
    // Insert status updates and pass in the completion handler block
    NSUInteger numberInserted =
        [vc insertStatusObjectsForFetchWithCompletionHandler:completionHandler];
    [UIApplication sharedApplication].applicationIconBadgeNumber += numberInserted;
}

Now, when the app is woken up for a background fetch, it will call through to the view controller, and perform
the update. Refreshingly simple.

Testing
So far, we haven't tested any of this code, and it's not immediately obvious how to simulate background fetch events. Xcode 5 has this sorted, but before we dive in we need to consider 2 cases:
1. App currently running in background
The user has started the app and has left it to do something else, but the app is continuing to run in the background (i.e. it hasn't been terminated). Xcode provides a new debugging method to simulate this, so testing is as simple as running up the app, pressing the home button and then invoking the new debug method:

Debug fetch request

Whilst debugging it's a good idea to have some logging in your fetch update methods to observe the fetch event taking place. In the sample app, this will update the app's badge on the home screen.
2. App currently in terminated state


The app has run before, but was terminated, either by the user or by iOS. The easiest way to simulate this is to add a new scheme to Xcode. Click manage schemes from the scheme drop-down in Xcode, and then duplicate the existing scheme. Edit the new scheme to update the run task with the option to launch as a background fetch process:

Enable fetch for launch

Now, when you run this scheme you'll see the simulator start up, but your app won't be launched. If you've got some logging in the background fetch delegate method then you'll see that output. See the attached project for an example of this.

Conclusion
Background fetch offers the opportunity to enhance the user experience of your app for a small amount of
effort. If your app relies on data updates from the internet, then this is a really simple way to ensure that your
user always has the latest information when the app launches.

Day 4: Speech Synthesis with AVSpeechSynthesizer

Introduction
iOS has had speech synthesis as part of Siri since iOS 5, but it was never exposed as functionality accessible via a public API. iOS 7 changes this, with a simple API - AVSpeechSynthesizer.

Voices
iOS 7 contains a set of different voices which can be used for speech synthesis. You can use these to specify
the language and variant you wish to synthesize. AVSpeechSynthesisVoice:speechVoices returns an array
of the available voices:
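
For example, logging the result (a one-line sketch):

NSLog(@"%@", [AVSpeechSynthesisVoice speechVoices]);

produces output along these lines: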
2013-07-12 10:49:26.929 GreetingSpeaker[31267:70b] (
    "[AVSpeechSynthesisVoice 0x978a0b0] Language: th-TH",
    "[AVSpeechSynthesisVoice 0x977a450] Language: pt-BR",
    "[AVSpeechSynthesisVoice 0x977a480] Language: sk-SK",
    "[AVSpeechSynthesisVoice 0x978ad50] Language: fr-CA",
    "[AVSpeechSynthesisVoice 0x978ada0] Language: ro-RO",
    "[AVSpeechSynthesisVoice 0x97823f0] Language: no-NO",
    "[AVSpeechSynthesisVoice 0x978e7b0] Language: fi-FI",
    "[AVSpeechSynthesisVoice 0x978af50] Language: pl-PL",
    "[AVSpeechSynthesisVoice 0x978afa0] Language: de-DE",
    "[AVSpeechSynthesisVoice 0x978e390] Language: nl-NL",
    "[AVSpeechSynthesisVoice 0x978b030] Language: id-ID",
    "[AVSpeechSynthesisVoice 0x978b080] Language: tr-TR",
    "[AVSpeechSynthesisVoice 0x978b0d0] Language: it-IT",
    "[AVSpeechSynthesisVoice 0x978b120] Language: pt-PT",
    "[AVSpeechSynthesisVoice 0x978b170] Language: fr-FR",
    "[AVSpeechSynthesisVoice 0x978b1c0] Language: ru-RU",
    "[AVSpeechSynthesisVoice 0x978b210] Language: es-MX",
    "[AVSpeechSynthesisVoice 0x978b2d0] Language: zh-HK",
    "[AVSpeechSynthesisVoice 0x978b320] Language: sv-SE",
    "[AVSpeechSynthesisVoice 0x978b010] Language: hu-HU",
    "[AVSpeechSynthesisVoice 0x978b440] Language: zh-TW",
    "[AVSpeechSynthesisVoice 0x978b490] Language: es-ES",
    "[AVSpeechSynthesisVoice 0x978b4e0] Language: zh-CN",
    "[AVSpeechSynthesisVoice 0x978b530] Language: nl-BE",
    "[AVSpeechSynthesisVoice 0x978b580] Language: en-GB",
    "[AVSpeechSynthesisVoice 0x978b5d0] Language: ar-SA",
    "[AVSpeechSynthesisVoice 0x978b620] Language: ko-KR",
    "[AVSpeechSynthesisVoice 0x978b670] Language: cs-CZ",
    "[AVSpeechSynthesisVoice 0x978b6c0] Language: en-ZA",
    "[AVSpeechSynthesisVoice 0x978aed0] Language: en-AU",
    "[AVSpeechSynthesisVoice 0x978af20] Language: da-DK",
    "[AVSpeechSynthesisVoice 0x978b810] Language: en-US",
    "[AVSpeechSynthesisVoice 0x978b860] Language: en-IE",
    "[AVSpeechSynthesisVoice 0x978b8b0] Language: hi-IN",
    "[AVSpeechSynthesisVoice 0x978b900] Language: el-GR",
    "[AVSpeechSynthesisVoice 0x978b950] Language: ja-JP"
)

You create a specific voice with the following class method:

AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];

If the language isn't recognized then the return value will be nil.

Utterances
An utterance represents a section of speech - a collection of which can be passed to the speech synthesizer to create a stream of speech. An utterance is created with the string which will be spoken by the speech synthesizer:

AVSpeechUtterance *utterance =
    [AVSpeechUtterance speechUtteranceWithString:@"Hello world!"];

We can specify the voice for an utterance with the voice property:

utterance.voice = voice;

There are other properties which can be set on an utterance, including rate, volume and pitchMultiplier. For example, to slow down the speech a little:

utterance.rate *= 0.7;

Once an utterance has been created it can be passed to a speech synthesizer object, which will cause the audio to be generated:

AVSpeechSynthesizer *speechSynthesizer = [[AVSpeechSynthesizer alloc] init];
[speechSynthesizer speakUtterance:utterance];

Utterances are queued by the synthesizer, so you can continue to pass utterances without waiting for the speech to be completed. If you attempt to pass an utterance instance which is currently in the queue then an exception will be thrown.

Implementation
The sample project which accompanies this article is a multi-lingual greeting app. This demonstrates the versatility of the speech synthesis functionality present in iOS 7.
It's important to note that the strings which define the utterances are all specified in the roman alphabet - e.g. "Ni hao" for Chinese. The sample project defines a class which creates utterances for a set of languages.
The project has a picker to allow the user to choose a language, and a button to hear the greeting spoken in the appropriate language.
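A utterance-creating class along these lines would do the job - a sketch only, since the class and method names here are illustrative rather than the sample project's exact code:

+ (AVSpeechUtterance *)greetingUtteranceForLanguage:(NSString *)languageCode
                                           greeting:(NSString *)greeting
{
    AVSpeechUtterance *utterance =
        [AVSpeechUtterance speechUtteranceWithString:greeting];
    utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:languageCode];
    // Slow the speech down a little, as before
    utterance.rate *= 0.7;
    return utterance;
}

// e.g. [SCGreetingFactory greetingUtteranceForLanguage:@"zh-CN" greeting:@"Ni hao"];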

Conclusion
Speech synthesis has been made really simple in iOS 7, with a wide range of languages. Used sensibly it has
potential for improving accessibility and enabling hands/eyes-free operation of apps.

Day 5: UIDynamics and Collection Views


Back at the beginning of this series, day 0 took a look at the new UIKit Dynamics physics engine, and used it to build a Newton's Cradle. Although this was a lot of fun, and served as a good introduction to UIKit Dynamics, it's not particularly obvious how this can be useful when building apps. Today's DbD looks at how to link the physics engine with UICollectionViews - resulting in some subtle effects noticeable as the user interacts with the collection.
The demo project which accompanies today's post is a horizontal springy carousel, where the individual items are attached to springs. We will also show how to use the dynamics engine to animate the newly added cells.

Building a Carousel
In order to demonstrate using the physics engine with a collection view, we firstly need to make a carousel out of a UICollectionView. This post isn't a tutorial on how to use UICollectionView, so I'll skip briefly through this part. We'll make the view controller the datasource and delegate for the collection view, and implement the methods we need:
#pragma mark - UICollectionViewDataSource methods

- (NSInteger)numberOfSectionsInCollectionView:(UICollectionView *)collectionView
{
    return 1;
}

- (NSInteger)collectionView:(UICollectionView *)collectionView
     numberOfItemsInSection:(NSInteger)section
{
    return [_collectionViewCellContent count];
}

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView
                  cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    SCCollectionViewSampleCell *cell =
        (SCCollectionViewSampleCell *)[self.collectionView
            dequeueReusableCellWithReuseIdentifier:@"SpringyCell"
                                      forIndexPath:indexPath];
    cell.numberLabel.text = [NSString stringWithFormat:@"%d",
        [_collectionViewCellContent[indexPath.row] integerValue]];
    return cell;
}

#pragma mark - UICollectionViewDelegateFlowLayout methods

- (CGSize)collectionView:(UICollectionView *)collectionView
                  layout:(UICollectionViewLayout *)collectionViewLayout
  sizeForItemAtIndexPath:(NSIndexPath *)indexPath
{
    return itemSize;
}

The cells are each square tiles which contain a number inside a UILabel. The numbers of the cells we are currently displaying in the collection view are stored inside an array (_collectionViewCellContent) as NSNumber objects. We do this to preserve the ordering of the cells - not important at this stage, but it will be once we work out how to insert new cells.
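Seeding that backing array might look something like the following - a hypothetical sketch, since the sample project's exact setup isn't shown here:

// In viewDidLoad, for example
_collectionViewCellContent = [NSMutableArray array];
for (NSUInteger i = 0; i < 20; i++) {
    [_collectionViewCellContent addObject:@(i)];
}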
In order to get the collection view to appear as a horizontal carousel we need to provide a custom layout. As is often the case, the flow layout has a lot of what we need, so we'll subclass that:
@interface SCSpringyCarousel : UICollectionViewFlowLayout

- (instancetype)initWithItemSize:(CGSize)size;

@end

In order to force all of the items into a horizontal carousel at the bottom of the view we need to know the
item height - hence the constructor which requires an item size. We override the prepareLayout method to
set the content inset to push the items to the bottom of the collection view:
- (void)prepareLayout
{
    // We update the section inset before we layout
    self.sectionInset = UIEdgeInsetsMake(
        CGRectGetHeight(self.collectionView.bounds) - _itemSize.height,
        0, 0, 0);
    [super prepareLayout];
}
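The initializer itself isn't shown in this walkthrough; plausibly it stashes the size in the _itemSize ivar and configures the flow layout for horizontal scrolling - a sketch under those assumptions, not the sample project's exact code:

- (instancetype)initWithItemSize:(CGSize)size
{
    self = [super init];
    if (self) {
        // Stash the size for use in prepareLayout (assumed ivar)
        _itemSize = size;
        self.itemSize = size;
        self.scrollDirection = UICollectionViewScrollDirectionHorizontal;
    }
    return self;
}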

Setting this as the layout on the collection view will create the horizontal carousel we're after.

- (void)viewDidLoad
{
    [super viewDidLoad];
    ...
    // Provide the layout
    _collectionViewLayout = [[SCSpringyCarousel alloc] initWithItemSize:itemSize];
    self.collectionView.collectionViewLayout = _collectionViewLayout;
}

Non-springy carousel

Adding springs
Now on to the more exciting stuff - let's fix this up with the UIKit Dynamics physics engine.
The physical model we're going to use has each item connected to the position it would have been fixed to in a vanilla flow layout - i.e. we take the items from the carousel we've already made, and attach them to their positions with springs. Then, as we scroll the view, the springs will stretch and we'll get the effect we want. Well, nearly: we also need to perturb the springs by a distance proportional to the distance from the touch point, but we'll come to that when the time is right.
Translating this model into a UIDynamics concept is as follows:

- When we are preparing the layout we request the positioning information from the flow layout superclass.
- We add appropriate behaviors to these positioning objects to allow them to be animated in the physics world.
- These behaviors and position objects are passed to the animator so that the simulation can run.
- The methods on UICollectionViewLayout are overridden to return the positions from the animator, instead of the flow layout superclass.

This all sounds a lot more complicated than it actually is - honestly! Let's work through it in stages.

Day 5: UIDynamics and Collection Views

31

Behavior Manager
In order to keep the code nice and tidy, we'll create a class which manages the dynamic behaviors inside the animator. Its API should look like the following:
@interface SCItemBehaviorManager : NSObject

@property (readonly, strong) UIGravityBehavior *gravityBehavior;
@property (readonly, strong) UICollisionBehavior *collisionBehavior;
@property (readonly, strong) NSDictionary *attachmentBehaviors;
@property (readonly, strong) UIDynamicAnimator *animator;

- (instancetype)initWithAnimator:(UIDynamicAnimator *)animator;

- (void)addItem:(UICollectionViewLayoutAttributes *)item anchor:(CGPoint)anchor;
- (void)removeItemAtIndexPath:(NSIndexPath *)indexPath;
- (void)updateItemCollection:(NSArray *)items;
- (NSArray *)currentlyManagedItemIndexPaths;

@end

The behavior of each of our cells is constructed from shared UIGravityBehavior and UICollisionBehavior objects and an individual UIAttachmentBehavior. We create our behavior manager with a UIDynamicAnimator, and expose methods for adding and removing items, as well as a method to update the collection to match an array.
When we create a manager object we want to create the shared behaviors and attach them to the animator:
- (instancetype)initWithAnimator:(UIDynamicAnimator *)animator
{
    self = [super init];
    if(self) {
        _animator = animator;
        _attachmentBehaviors = [NSMutableDictionary dictionary];
        [self createGravityBehavior];
        [self createCollisionBehavior];
        // Add the global behaviors to the animator
        [self.animator addBehavior:self.gravityBehavior];
        [self.animator addBehavior:self.collisionBehavior];
    }
    return self;
}

with the 2 utility methods called here being very simple, and having similar composition to what we used for the Newton's Cradle project back on day 0:

- (void)createGravityBehavior
{
    _gravityBehavior = [[UIGravityBehavior alloc] init];
    _gravityBehavior.magnitude = 0.3;
}

- (void)createCollisionBehavior
{
    _collisionBehavior = [[UICollisionBehavior alloc] init];
    _collisionBehavior.collisionMode = UICollisionBehaviorModeBoundaries;
    _collisionBehavior.translatesReferenceBoundsIntoBoundary = YES;
    // Need to add item behavior specific to this
    UIDynamicItemBehavior *itemBehavior = [[UIDynamicItemBehavior alloc] init];
    itemBehavior.elasticity = 1;
    // Add it as a child behavior
    [_collisionBehavior addChildBehavior:itemBehavior];
}

You'll notice that we don't add any dynamic items to the behaviors at this stage - principally because we don't actually have any yet. The collision behavior isn't going to be used for collisions between the individual cells, but instead with the boundary of the collection view - hence we set the two properties: collisionMode and translatesReferenceBoundsIntoBoundary. We also add a UIDynamicItemBehavior to specify the elasticity of the collisions, in the same way that we did with the pendula.
Now that we have created these global behaviors we need to implement the addItem:anchor: and removeItemAtIndexPath: methods. The add method will add the new item to the global behaviors and also set up the spring which attaches the cell to the background canvas:
- (void)addItem:(UICollectionViewLayoutAttributes *)item anchor:(CGPoint)anchor
{
    UIAttachmentBehavior *attachmentBehavior =
        [self createAttachmentBehaviorForItem:item anchor:anchor];
    // Add the behavior to the animator
    [self.animator addBehavior:attachmentBehavior];
    // And store it in the dictionary, keyed by the index path
    [_attachmentBehaviors setObject:attachmentBehavior forKey:item.indexPath];

    // Also need to add this item to the global behaviors
    [self.gravityBehavior addItem:item];
    [self.collisionBehavior addItem:item];
}

The spring behavior is created using a utility method:

- (UIAttachmentBehavior *)createAttachmentBehaviorForItem:(id<UIDynamicItem>)item
                                                   anchor:(CGPoint)anchor
{
    UIAttachmentBehavior *attachmentBehavior =
        [[UIAttachmentBehavior alloc] initWithItem:item
                                  attachedToAnchor:anchor];
    attachmentBehavior.damping = 0.5;
    attachmentBehavior.frequency = 0.8;
    attachmentBehavior.length = 0;
    return attachmentBehavior;
}

We also store the attachment behavior in a dictionary, keyed by the NSIndexPath. This will allow us to work out which spring we need to remove when we implement the remove method.
Once we've created the attachment behavior we add it to the animator, and add the provided item to the shared gravity and collision behaviors.
The remove method performs exactly the opposite operation - removing the attachment behavior from the animator and the item from the shared gravity and collision behaviors:
- (void)removeItemAtIndexPath:(NSIndexPath *)indexPath
{
    // Remove the attachment behavior from the animator
    UIAttachmentBehavior *attachmentBehavior = self.attachmentBehaviors[indexPath];
    [self.animator removeBehavior:attachmentBehavior];

    // Remove the item from the global behaviors
    for(UICollectionViewLayoutAttributes *attr in [self.gravityBehavior.items copy])
    {
        if([attr.indexPath isEqual:indexPath]) {
            [self.gravityBehavior removeItem:attr];
        }
    }
    for(UICollectionViewLayoutAttributes *attr in [self.collisionBehavior.items copy])
    {
        if([attr.indexPath isEqual:indexPath]) {
            [self.collisionBehavior removeItem:attr];
        }
    }

    // And remove the entry from our dictionary
    [_attachmentBehaviors removeObjectForKey:indexPath];
}


This method is slightly more complicated than we would like. Removing the attachment behavior is as we would expect, but removing the item from the shared behaviors takes a little more work. The item objects have been copied, and so have different references. Therefore we need to search through all of the items each shared behavior is acting upon, and remove the one with the same index path.
There is one more method on the API of the behavior manager - updateItemCollection:. This method takes a collection of items and then calls the addItem:anchor: and removeItemAtIndexPath: methods with the correct arguments to ensure that the manager is currently managing the correct items. We'll see very soon why this is useful, but let's take a look at the implementation:
- (void)updateItemCollection:(NSArray *)items
{
    // Let's find the ones we need to remove. We work in indexPaths here
    NSMutableSet *toRemove = [NSMutableSet
        setWithArray:[self.attachmentBehaviors allKeys]];
    [toRemove minusSet:[NSSet setWithArray:[items valueForKeyPath:@"indexPath"]]];

    // Let's remove any we no longer need
    for (NSIndexPath *indexPath in toRemove) {
        [self removeItemAtIndexPath:indexPath];
    }

    // Find the items we need to add springs to. A bit more complicated =(
    // Loop through the items we want
    NSArray *existingIndexPaths = [self currentlyManagedItemIndexPaths];
    for(UICollectionViewLayoutAttributes *attr in items) {
        // Find whether this item matches an existing index path
        BOOL alreadyExists = NO;
        for(NSIndexPath *indexPath in existingIndexPaths) {
            if ([indexPath isEqual:attr.indexPath]) {
                alreadyExists = YES;
            }
        }
        // If it doesn't then let's add it
        if(!alreadyExists) {
            [self addItem:attr anchor:attr.center];
        }
    }
}

It's a very simple method - we first find the items we need to remove, using some simple set operations ({items we currently have} minus {items we should have}). Then we loop through the resultant set and call the removeItemAtIndexPath: method.


To work out the items we need to add, we try to find each item in the collection we've been sent in our dictionary of managed items. If we can't find it then we need to start managing the behavior for it, so we call the addItem:anchor: method. Importantly, the anchor point is the current center position provided in the UIDynamicItem object. In terms of the UICollectionView, this means that we want our items to be anchored to the positions the flow layout would like to place them at.

Using the manager in the collection view layout


Now we've created the behavior manager, we've actually implemented nearly all of the UIDynamics code we need. All that remains is to wire it up to the collection view layout. This wiring up takes the form of overriding several methods in our UICollectionViewFlowLayout subclass: SCSpringyCarousel.
We had already overridden prepareLayout to force the flow layout to take the form of a horizontal carousel. We now add more to that method to ensure that the dynamic animator has all the relevant items under its control:
- (void)prepareLayout
{
    // We update the section inset before we layout
    self.sectionInset = UIEdgeInsetsMake(
        CGRectGetHeight(self.collectionView.bounds) - _itemSize.height,
        0, 0, 0);
    [super prepareLayout];

    // Get a list of the objects around the current view
    CGRect expandedViewPort = self.collectionView.bounds;
    expandedViewPort.origin.x -= 2 * _itemSize.width;
    expandedViewPort.size.width += 4 * _itemSize.width;
    NSArray *currentItems = [super layoutAttributesForElementsInRect:expandedViewPort];

    // We update our behavior collection to contain the items we can currently see
    [_behaviorManager updateItemCollection:currentItems];
}

The first few lines of code are exactly as before. We then work out an expanded viewport bounds. This involves
taking the current viewport and expanding it to the left and right, ensuring that the items which are soon
to appear on screen are under the control of our dynamic animator. Once we have the viewport we ask our
superclass for the layout attributes for all the items which would appear within this rectangle - i.e. all the
items which would have appeared within that range had we been using a vanilla flow layout. Like UIView
these UICollectionViewLayoutAttributes objects all adopt the UIDynamicItem protocol, and hence can be
animated by our UIDynamicAnimator. We pass this collection of objects through to our behavior manager to
ensure that we are managing the behavior of the correct items.
The next method we need to override is shouldInvalidateLayoutForBoundsChange:. We don't actually want to change the behavior of this method (the default returns NO and we won't change this), but it gets called whenever the bounds of our collection view changes. In the world of scroll views, the bounds property represents the current viewport position - i.e. the x and y values are not necessarily 0 as they usually are. Therefore, a bounds change event in a UIScrollView subclass actually occurs as the scroll view is scrolled.
This method is the most complicated part of this demo project, so we'll step through it bit-by-bit.
- (BOOL)shouldInvalidateLayoutForBoundsChange:(CGRect)newBounds
{
    CGFloat scrollDelta = newBounds.origin.x - self.collectionView.bounds.origin.x;

    CGPoint touchLocation = [self.collectionView.panGestureRecognizer
                             locationInView:self.collectionView];

    for (UIAttachmentBehavior *bhvr in [_behaviorManager.attachmentBehaviors allValues]) {
        CGPoint anchorPoint = bhvr.anchorPoint;
        CGFloat distFromTouch = ABS(anchorPoint.x - touchLocation.x);

        UICollectionViewLayoutAttributes *attr = [bhvr.items firstObject];
        CGPoint center = attr.center;
        CGFloat scrollFactor = MIN(1, distFromTouch / 500);

        center.x += scrollDelta * scrollFactor;
        attr.center = center;

        [_dynamicAnimator updateItemUsingCurrentState:attr];
    }

    return NO;
}

1. Firstly we find out how much we have just scrolled the scroll view - since we were last called, and hence last updated our springs.
2. We can then find the location of the current touch within the collection view, since we have access to the panGestureRecognizer of the underlying scroll view.
3. Now we need to loop through each of the springs in the behavior manager, updating them.
4. Firstly we find out how far our item's rest position (i.e. the behavior's anchor point) is from the touch. This is because we're going to stretch the springs proportionally to how far they are from our touch point.
5. Then we work out the new position of the current cell - using a magic scrollFactor and the actual scrollDelta.
6. We tell the dynamic animator that it should refresh its understanding of the item's state. When an item is added to a dynamic animator it makes an internal copy of the item's state and then animates that. In order to push new state in we update the UIDynamicItem properties and then tell the animator that it should reload the state of this item.
7. Finally we return NO - we are letting the dynamic animator manage the positions of our cells, so we don't need the collection view to re-request them from the layout.


There are 2 more methods we need to override; the purpose of both is to remove the responsibility of item layout from the flow layout class, and give it instead to the dynamic animator:
- (NSArray *)layoutAttributesForElementsInRect:(CGRect)rect
{
    return [_dynamicAnimator itemsInRect:rect];
}

- (UICollectionViewLayoutAttributes *)layoutAttributesForItemAtIndexPath:
    (NSIndexPath *)indexPath
{
    return [_dynamicAnimator layoutAttributesForCellAtIndexPath:indexPath];
}

The dynamic animator has 2 helper methods for precisely this purpose, which plug nicely into the collection
view layout class. These methods are used by the collection view to position the cells. We simply get the
dynamic animator to return the positions of the relevant cells - either by indexPath or for the cells which are
visible in the specified rectangle.

Test run
Well, if you run this project up now you should have a horizontal carousel which, as you drag items around, gives a springy effect - cells ahead of the drag direction bunch up, and those behind spread out.

Carousel scrolling with spring effect


Inserting items
Now that we've got this springy carousel working, we're going to see how difficult it is to govern adding new cells with the dynamic animator as well as scrolling. We've actually done a lot of the work, so let's see what we need to add.
With a standard UICollectionView, the layout provides the layout attributes for an appearing item, and then the item will be animated to its final position within the collection - i.e. the position returned by layoutAttributesForItemAtIndexPath:. However, we are going to perform the animation using our UIDynamicAnimator, and therefore need to prevent UIView animations. To do this add the following line to prepareLayout:

[UIView setAnimationsEnabled:NO];

This will ensure that we don't have 2 different animation processes fighting against each other.
As mentioned, the UICollectionViewLayout will get called to ask for where a new item should be positioned,
using the snappily named initialLayoutAttributesForAppearingItemAtIndexPath: method. We are going
to let our animator handle this:
- (UICollectionViewLayoutAttributes *)initialLayoutAttributesForAppearingItemAtIndexPath:
    (NSIndexPath *)itemIndexPath
{
    return [_dynamicAnimator layoutAttributesForCellAtIndexPath:itemIndexPath];
}

Now we actually need to let the animator know that there is a new item arriving, update the positions of the existing items appropriately, and position the new one. We override the prepareForCollectionViewUpdates: method on the SCSpringyCarousel class:
- (void)prepareForCollectionViewUpdates:(NSArray *)updateItems
{
    for (UICollectionViewUpdateItem *updateItem in updateItems) {
        if(updateItem.updateAction == UICollectionUpdateActionInsert) {
            // Reset the springs of the existing items
            [self resetItemSpringsForInsertAtIndexPath:updateItem.indexPathAfterUpdate];

            // Where would the flow layout like to place the new cell?
            UICollectionViewLayoutAttributes *attr =
                [super initialLayoutAttributesForAppearingItemAtIndexPath:
                    updateItem.indexPathAfterUpdate];
            CGPoint center = attr.center;
            CGSize contentSize = [self collectionViewContentSize];
            center.y -= contentSize.height - CGRectGetHeight(attr.bounds);

            // Now reset the center of insertion point for the animator
            UICollectionViewLayoutAttributes *insertionPointAttr = [self
                layoutAttributesForItemAtIndexPath:updateItem.indexPathAfterUpdate];
            insertionPointAttr.center = center;
            [_dynamicAnimator updateItemUsingCurrentState:insertionPointAttr];
        }
    }
}

This is a long method, but we can break it down into simple chunks:
1. This method gets called for inserts, removals and moves. We're only interested in insertions for this project, so we're only going to do something if our update is of type UICollectionUpdateActionInsert.
2. When an insert happens, the collection view will re-assign the layout attributes of those cells above the insertion index to their nextmost neighbor - i.e. if inserting at index 4, then the cell currently at 5 will be updated to have the layout attributes of the cell currently at 6, etc. In our scenario we want to keep the anchor point of the behavior associated with the layout attributes of our neighbor, but the position should be our current position - not that of our neighbor. We perform this with a utility method, resetItemSpringsForInsertAtIndexPath:, which we'll look at later.
3. Now we deal with the new cell which is being inserted. We ask the flow layout where it would like to position it. We want it to appear at the top of the collection view, so that the animator will drop it down using the gravity behavior. We use this to work out where the center of the inserted cell should be.
4. Now we ask the animator for the layout attributes for the index path we're inserting at, and then update the position to match the one we've just calculated.
The final piece of the puzzle is the aforementioned method which is used to update the springs of the items
moved to make space for the new item:
- (void)resetItemSpringsForInsertAtIndexPath:(NSIndexPath *)indexPath
{
    // Get a list of items, sorted by their indexPath
    NSArray *items = [_behaviorManager currentlyManagedItemIndexPaths];
    // Now loop backwards, updating centers appropriately.
    // We need to get 2 enumerators - copy from one to the other
    NSEnumerator *fromEnumerator = [items reverseObjectEnumerator];
    // We want to skip the lastmost object in the array as we're copying left to right
    [fromEnumerator nextObject];
    // Now enumerate the array - through the 'to' positions
    [items enumerateObjectsWithOptions:NSEnumerationReverse
                            usingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
        NSIndexPath *toIndex = (NSIndexPath *)obj;
        NSIndexPath *fromIndex = (NSIndexPath *)[fromEnumerator nextObject];

        // If the 'from' cell is after the insert then we need to reset the springs
        if(fromIndex && fromIndex.item >= indexPath.item) {
            UICollectionViewLayoutAttributes *toItem =
                [self layoutAttributesForItemAtIndexPath:toIndex];
            UICollectionViewLayoutAttributes *fromItem =
                [self layoutAttributesForItemAtIndexPath:fromIndex];
            toItem.center = fromItem.center;
            [_dynamicAnimator updateItemUsingCurrentState:toItem];
        }
    }];
}

We have already explained the concept above, and the implementation is pretty simple to follow. We use 2 reverse enumerators, and copy the position of the cell from one to the other. Then, when the collection view updates the layout attributes of the cells, the springs will be set to pull them from their old position to their new one.
We just need to add a button and method to the view controller to manage the item additions. We add the button in the storyboard, and attach it to the following method:
- (IBAction)newViewButtonPressed:(id)sender {
    // What's the new number we're creating?
    NSNumber *newTile = @([_collectionViewCellContent count]);

    // We want to place it in at the correct position
    NSIndexPath *rightOfCenter = [self indexPathOfItemRightOfCenter];

    // Insert the new item content
    [_collectionViewCellContent insertObject:newTile atIndex:rightOfCenter.item];

    // Redraw
    [self.collectionView insertItemsAtIndexPaths:@[rightOfCenter]];
}

There's a utility method to work out the index path of the item just to the right of the center of the currently visible items:
- (NSIndexPath *)indexPathOfItemRightOfCenter
{
    // Find all the currently visible items
    NSArray *visibleItems = [self.collectionView indexPathsForVisibleItems];

    // Calculate the middle of the current collection view content
    CGFloat midX = CGRectGetMidX(self.collectionView.bounds);
    NSUInteger indexOfItem;
    CGFloat curMin = CGFLOAT_MAX;

    // Loop through the visible cells to find the left of center one
    for (NSIndexPath *indexPath in visibleItems) {
        UICollectionViewCell *cell = [self.collectionView
                                      cellForItemAtIndexPath:indexPath];
        if (ABS(CGRectGetMidX(cell.frame) - midX) < ABS(curMin)) {
            curMin = CGRectGetMidX(cell.frame) - midX;
            indexOfItem = indexPath.item;
        }
    }

    // If min is -ve then we have left of centre. If +ve then we have right of centre.
    if(curMin < 0) {
        indexOfItem += 1;
    }

    // And now get the index path to pass back
    return [NSIndexPath indexPathForItem:indexOfItem inSection:0];
}

And with that we're done. Fire up the app and try adding cells - they drop in nicely and then bounce; really cool. Try adding cells whilst the carousel is scrolling - this shows how awesome the dynamic animator really is!

New cell being inserted under gravity


Conclusion
In day 0 we showed how easy the UIKit Dynamics physics engine is to use, but with today's post we've really got to grips with a real-world example - using it to animate the cells in a collection view. This has some excellent applications, and despite its apparent complexity, is actually pretty easy to get your head around. I encourage you to investigate adding subtle animations to your collection views, which will delight users, albeit subconsciously.

Day 6: TintColor
A fairly small and seemingly unobtrusive addition to UIView, the tintColor property is actually incredibly powerful. Today we'll look at how to use it, including tinting iOS standard controls, using tintColor in our own controls and even how to recolor images.

Tint color of existing iOS controls


UIView adds a new property in iOS7 - tintColor. This is a UIColor and is used by UIView subclasses to change the appearance of an app. tintColor is nil by default, which means that it will use its parent in the view hierarchy for its tint color. If no parents in the view hierarchy have a tintColor set then the default system blue color will be used. Therefore, it's possible to completely change the appearance of an entire app by setting the tintColor on the view associated with the root view controller.
To demonstrate this, and to see how tintColor changes the appearance of some standard controls, take a look at the ColorChanger app.
The storyboard contains a selection of controls - including UIButton, UISlider and UIStepper. We've linked a "change color" button to the following method in the view controller:
- (IBAction)changeColorHandler:(id)sender {
    // Generate a random color
    CGFloat hue = ( arc4random() % 256 / 256.0 );
    CGFloat saturation = ( arc4random() % 128 / 256.0 ) + 0.5;
    CGFloat brightness = ( arc4random() % 128 / 256.0 ) + 0.5;
    UIColor *color = [UIColor colorWithHue:hue
                                saturation:saturation
                                brightness:brightness
                                     alpha:1];
    self.view.tintColor = color;
}

The majority of this method is concerned with generating a random color - the final line is all that is needed to change the tint color, and hence the appearance of all the different controls.
One UI control which doesn't respond to tintColor changes as you might expect is UIProgressView. This is because it actually has 2 tint colors - one for the progress bar itself, and one for the background track. In order to get this to change color along with the other UI controls, we add the following method:

- (void)updateProgressViewTint
{
    self.progressView.progressTintColor = self.view.tintColor;
}

This gets called at the end of changeColorHandler:.

Tint Dimming
In addition to being able to set a tint color, there is another property on UIView which allows you to dim the tint color - hence dimming an entire view hierarchy. This property is tintAdjustmentMode and can be set to one of three values: UIViewTintAdjustmentModeNormal, UIViewTintAdjustmentModeDimmed or UIViewTintAdjustmentModeAutomatic. To demonstrate the effects this has we've added a UISwitch and wired up its valueChanged event to the following method:

- (IBAction)dimTintHandler:(id)sender {
    if(self.dimTintSwitch.isOn) {
        self.view.tintAdjustmentMode = UIViewTintAdjustmentModeDimmed;
    } else {
        self.view.tintAdjustmentMode = UIViewTintAdjustmentModeNormal;
    }
    [self updateProgressViewTint];
}

When you flick the switch you'll see that all the regions which usually take the tint color now dim to a gray color. This is especially useful if you want to display a modal popup, and want to dim the background so as not to detract attention from the content you want the user to be concentrating on.
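For example, a presenting view controller might dim itself while a modal overlay is visible - a sketch (modalVC here is a hypothetical view controller of your own):

// Dim the presenting hierarchy while the overlay is on-screen
self.view.tintAdjustmentMode = UIViewTintAdjustmentModeDimmed;
[self presentViewController:modalVC animated:YES completion:nil];
// ...and restore UIViewTintAdjustmentModeAutomatic on dismissal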

Using tint color in custom views


There is a new method on UIView which gets called whenever the tintColor property (or similarly, the tintAdjustmentMode property) gets changed in such a way that it affects this view - i.e. it changes on the current view, or, if the current view has a nil tintColor, when the tintColor changes on the ancestor in the UIView hierarchy whose tint color we're adopting.
To demonstrate how this works we'll build a really simple UIView subclass. It will contain a solid block of the tint color, a label which has a text color the same as the tint color, and a label whose text color will remain gray.
@implementation SCSampleCustomControl {
    UIView *_tintColorBlock;
    UILabel *_greyLabel;
    UILabel *_tintColorLabel;
}

- (id)initWithCoder:(NSCoder *)aDecoder
{
    self = [super initWithCoder:aDecoder];
    if(self)
    {
        self.backgroundColor = [UIColor clearColor];
        [self prepareSubviews];
    }
    return self;
}

- (void)prepareSubviews
{
    _tintColorBlock = [[UIView alloc] init];
    _tintColorBlock.backgroundColor = self.tintColor;
    [self addSubview:_tintColorBlock];

    _greyLabel = [[UILabel alloc] init];
    _greyLabel.text = @"Grey label";
    _greyLabel.textColor = [UIColor grayColor];
    [_greyLabel sizeToFit];
    [self addSubview:_greyLabel];

    _tintColorLabel = [[UILabel alloc] init];
    _tintColorLabel.text = @"Tint color label";
    _tintColorLabel.textColor = self.tintColor;
    [_tintColorLabel sizeToFit];
    [self addSubview:_tintColorLabel];
}
@end

This first chunk of code creates the three aforementioned elements, and sets their initial colors. Note that since we're being created from a storyboard, we need to set the sizes of each of our components inside layoutSubviews:
- (void)layoutSubviews
{
    _tintColorBlock.frame = CGRectMake(0, 0, CGRectGetWidth(self.bounds) / 3,
                                       CGRectGetHeight(self.bounds));

    CGRect frame = _greyLabel.frame;
    frame.origin.x = CGRectGetWidth(self.bounds) / 3 + 10;
    frame.origin.y = 0;
    _greyLabel.frame = frame;

    frame = _tintColorLabel.frame;
    frame.origin.x = CGRectGetWidth(self.bounds) / 3 + 10;
    frame.origin.y = CGRectGetHeight(self.bounds) / 2;
    _tintColorLabel.frame = frame;
}

So far we've done nothing new or clever - we've just built up a simple UIView subclass in code. The interesting part comes now - when we override the new tintColorDidChange method:

- (void)tintColorDidChange
{
    _tintColorLabel.textColor = self.tintColor;
    _tintColorBlock.backgroundColor = self.tintColor;
}

All we're doing here is setting the colors of the views we want to respect the tintColor.
And that's it. The tint color changing code in the view controller doesn't need to change. Because of the way that tintColor works with the UIView hierarchy we don't have to touch anything else.

Tinting images with tintColor


The final rather cool part of the tintColor story is the ability to recolor images using the view's tint color. Image tinting takes any pixels which have an alpha value of 1 and sets them to the tint color; all other pixels are set to transparent. This is ideal for adding image backgrounds to custom controls etc.
In this demo we'll show how to recolor the famous Shinobi ninja head logo.
We've added a UIImageView to our storyboard, and created an outlet called tintedImageView in the view controller. Then in viewDidLoad we add the following code:
// Load the image
UIImage *shinobiHead = [UIImage imageNamed:@"shinobihead"];
// Set the rendering mode to respect tint color
shinobiHead = [shinobiHead imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
// And set to the image view
self.tintedImageView.image = shinobiHead;
self.tintedImageView.contentMode = UIViewContentModeScaleAspectFit;

We first load the image, and then we call imageWithRenderingMode: to change the rendering mode to UIImageRenderingModeAlwaysTemplate. Other options here are UIImageRenderingModeAlwaysOriginal and UIImageRenderingModeAutomatic. The automatic version is the default, in which case the mode will change according to the context of the image's use - e.g. tab bars, toolbars etc. automatically use their foreground images as template images.
Once we've set the image mode to templated, we simply set it as the image for our image view, and set the scaling factor to ensure the ninja's head doesn't get squashed.


Conclusion
On the surface tintColor seems a really simple addition to UIView; however, it actually represents some incredibly powerful appearance customization functionality. If you're creating your own UIView subclasses or custom controls, then I encourage you to make sure that you implement tintColorDidChange - it'll make your work a lot more in-line with the standard UIKit components.

Day 7: Taking Snapshots of UIViews


Introduction
It has always been possible to take snapshots of UIView objects - and there are several reasons that you might want to - from improving the performance of animations to sharing screenshots of your app. The existing approach has suffered from several issues:

- The code isn't very simple
- Complex rendering options such as layer masks have been difficult to reproduce
- OpenGL layers have required special case code
- The snapshotting process has been quite slow

In fact, there isn't really any generic snapshot code which can cope with every possible scenario.
This has all changed with iOS7, with new methods on UIView and UIScreen which allow easy snapshotting for a variety of use cases.

Snapshotting for Animation


Often we might want to animate a view, but that view is sufficiently complex that animating it is either too intensive, or would involve additional code to control its behavior correctly.
As an example, in the project associated with this post we've created a UIView subclass which simply consists of a set of subviews, each of which is rotated to generate a pleasing geometric arrangement:

Rotating Views

This is generated by calling the following method in the constructor:

- (void)generateRotations
{
    for (CGFloat angle = 0; angle < 2 * M_PI; angle += M_PI / 20.0) {
        UIView *newView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 250)];
        newView.center = CGPointMake(CGRectGetMidX(self.bounds),
                                     CGRectGetMidY(self.bounds));
        newView.layer.borderColor = [UIColor grayColor].CGColor;
        newView.layer.borderWidth = 1;
        newView.backgroundColor = [UIColor colorWithWhite:0.8 alpha:0.4];
        newView.transform = CGAffineTransformMakeRotation(angle);
        newView.autoresizingMask = UIViewAutoresizingFlexibleHeight |
                                   UIViewAutoresizingFlexibleWidth;
        [self addSubview:newView];
    }
}

In creating this view I'm not suggesting that it's the best way to create this effect, or indeed that it is useful, but it does demonstrate a point.
In the view controller we'll create a couple of utility methods which we'll use repeatedly in this project. The first creates one of these rotating views and adds it as a subview:
- (void)createComplexView
{
    _complexView = [[SCRotatingViews alloc] initWithFrame:self.view.bounds];
    [self.containerView addSubview:_complexView];
}

The second is a sample animation method, which animates a supplied view by reducing its size to (0,0):
- (void)animateViewAwayAndReset:(UIView *)view
{
    [UIView animateWithDuration:2.0
                     animations:^{
                         view.bounds = CGRectZero;
                     }
                     completion:^(BOOL finished) {
                         [view removeFromSuperview];
                         [self performSelector:@selector(createComplexView)
                                    withObject:nil
                                    afterDelay:1];
                     }];
}

When the animation is complete it removes the supplied view, and then after a short delay resets the app by
recreating a new _complexView.
The following method is linked up to the toolbar button labelled Animate:
- (IBAction)handleAnimate:(id)sender {
    [self animateViewAwayAndReset:_complexView];
}

The following picture demonstrates the problem that we have animating the rotating view we've created:

Animate

This problem definitely isn't insurmountable, but it would involve us changing the way SCRotatingViews is constructed.
The new snapshotting methods come to the rescue here though. The following method is wired up to the "S'Shot" toolbar button:
- (IBAction)handleSnapshot:(id)sender {
    UIView *snapshotView = [_complexView snapshotViewAfterScreenUpdates:NO];
    [self.containerView addSubview:snapshotView];
    [_complexView removeFromSuperview];
    [self animateViewAwayAndReset:snapshotView];
}

We call snapshotViewAfterScreenUpdates: to create a snapshot of our complex view. This returns a UIView which represents the appearance of the view it has been called on. It's an incredibly efficient way of getting a snapshot of the view - faster than the old method of making a bitmap representation.
Once we've got our snapshot view we add it to the container view, and remove the actual complex view. Then we can animate the snapshot view:

Snapshot

Pre/post View Updates

The snapshotViewAfterScreenUpdates: method has a single BOOL argument, which specifies whether the snapshot should be taken immediately, or whether any pending view updates should be committed first.
For example, we add the following method to the SCRotatingViews class:
- (void)recolorSubviews:(UIColor *)newColor
{
    for (UIView *subview in self.subviews) {
        subview.backgroundColor = newColor;
    }
}

This simply recolors all the subviews when called.


To demonstrate the effect of the argument on the snapshot method we create 2 methods on the view controller,
and wire them up to the Pre and Post toolbar buttons:

- (IBAction)handlePreUpdateSnapshot:(id)sender {
    // Change the views
    [_complexView recolorSubviews:[[UIColor redColor] colorWithAlphaComponent:0.3]];
    // Take a snapshot. Don't wait for changes to be applied
    UIView *snapshotView = [_complexView snapshotViewAfterScreenUpdates:NO];
    [self.containerView addSubview:snapshotView];
    [_complexView removeFromSuperview];
    [self animateViewAwayAndReset:snapshotView];
}

- (IBAction)handlePostUpdateSnapshot:(id)sender {
    // Change the views
    [_complexView recolorSubviews:[[UIColor redColor] colorWithAlphaComponent:0.3]];
    // Take a snapshot. This time, wait for the render changes to be applied
    UIView *snapshotView = [_complexView snapshotViewAfterScreenUpdates:YES];
    [self.containerView addSubview:snapshotView];
    [_complexView removeFromSuperview];
    [self animateViewAwayAndReset:snapshotView];
}

The methods are identical, apart from the argument to the snapshotViewAfterScreenUpdates: method. Firstly we call the recolorSubviews: method, then perform the same snapshot procedure we did in the previous example. The following images show the difference in behavior of the 2 methods:

As expected, setting NO will snapshot immediately, and therefore doesn't include the result of the recoloring method call. Setting YES allows the render loop to complete the currently queued changes before snapshotting.

Snapshotting to an image
When animating, it's actually far more useful to be able to snapshot straight to a UIView; however, there are times when it's helpful to have an actual image. For example, we might want to blur the current view before animating it away. There is another snapshotting method on UIView for this exact purpose: drawViewHierarchyInRect:afterScreenUpdates:. This allows you to draw the view into a Core Graphics context, and hence you can get hold of a bitmap for the current view. It's worth noting that this method is significantly less efficient than snapshotViewAfterScreenUpdates:, but if you need a bitmap representation then this is the best way to go about it.
We wire the following method up to the Image toolbar button:

- (IBAction)handleImageSnapshot:(id)sender {
    // Want to create an image context - the size of view and the scale of the screen
    UIGraphicsBeginImageContextWithOptions(_complexView.bounds.size, NO, 0.0);
    // Render our snapshot into the image context
    [_complexView drawViewHierarchyInRect:_complexView.bounds afterScreenUpdates:NO];

    // Grab the image from the context
    UIImage *complexViewImage = UIGraphicsGetImageFromCurrentImageContext();
    // Finish using the context
    UIGraphicsEndImageContext();

    UIImageView *iv = [[UIImageView alloc] initWithImage:[self
                                    applyBlurToImage:complexViewImage]];
    iv.center = _complexView.center;
    [self.containerView addSubview:iv];
    [_complexView removeFromSuperview];
    // Let's wait a bit before we animate away
    [self performSelector:@selector(animateViewAwayAndReset:)
               withObject:iv
               afterDelay:1.0];
}

Firstly we create a Core Graphics image context with the correct size and scale for the _complexView, and then call the drawViewHierarchyInRect:afterScreenUpdates: method - the second argument being the same as the argument to the previous snapshotting method.
Then we pull the graphics context into a UIImage, which we display in a UIImageView, with the same pattern of replacing the complex view and animating it out. To demonstrate a possible reason for needing a UIImage rather than a UIView we've created a method which blurs a UIImage:
- (UIImage *)applyBlurToImage:(UIImage *)image
{
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *ci_image = [CIImage imageWithCGImage:image.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:ci_image forKey:kCIInputImageKey];
    [filter setValue:@5 forKey:kCIInputRadiusKey];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
    return [UIImage imageWithCGImage:cgImage
                               scale:image.scale
                         orientation:image.imageOrientation];
}

This is a simple application of a CoreImage filter: it applies a Gaussian blur and returns a new UIImage. The following is a shot of the effect we've created:


Snapshot to graphics context to allow blurring

Limitations
If you've ever tried to take a snapshot of an OpenGL-backed UIView you'll know that it is quite an involved process (users of ShinobiCharts might be familiar with the pain). Excitingly, the new UIView snapshot methods handle OpenGL seamlessly.
Because the snapshot methods create versions which respect the appearance of the views on-screen, they are only able to snapshot views which are on-screen. This means it's not possible to use these methods to create snapshots of views which you want to animate into view - an alternative approach must be used. It also means that if your view is clipped by the edge of the screen, then your snapshot will be clipped, as shown here:


Snapshotting views clips them to the visible region
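For views which aren't yet on-screen, the pre-iOS7 approach of rendering the view's layer into an image context still works - a brief sketch of that fallback (not part of the sample project; requires QuartzCore):

UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();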

Conclusion
Taking snapshots of UIView elements in iOS has always been really useful, and with iOS7 we've finally got a sensible API to allow us to take snapshots of views for most of the common purposes. That doesn't mean that there aren't any limitations - you'll still need to use alternative approaches for some scenarios, but 90% of use cases just got a whole lot easier!

Day 8: Reading list with SafariServices


Introduction
The concept of a reading list is a simple one - often when you're browsing you'll come across an article you want to read, but don't have time to read it immediately. A reading list is a way to temporarily bookmark the page so that it can be read later. There are various 3rd party reading list apps, but with iOS7 SafariServices exposes an API for the reading list which is integral to Safari.

Usage
Using the Safari reading list is remarkably easy - there are just 3 methods of interest. A reading list item
consists of a URL, a title and a description. The only URLs which are acceptable are of type HTTP or HTTPS
- you can check the validity of a URL using the supportsURL: class method:
if([SSReadingList supportsURL:[NSURL URLWithString:@"http://sample/article/url"]]) {
    NSLog(@"URL is supported");
}

Once you've checked that the URL you want to add is valid, adding it involves getting hold of the default reading list and calling the add method:
SSReadingList *readingList = [SSReadingList defaultReadingList];

NSError *error;
[readingList addReadingListItemWithURL:[NSURL URLWithString:@"http://sample/article/url"]
                                 title:@"Item Title"
                           previewText:@"Brief preview text"
                                 error:&error];
if(error) {
    NSLog(@"There was a problem adding to a reading list");
} else {
    NSLog(@"Successfully added to reading list");
}

That's all there is to it! The pic below shows Safari's updated reading list:


Reading list in Safari

Sample project
The sample project for this article pulls down the RSS feed from the ShinobiControls blog and displays the items in a table view. The detail page contains a toolbar button which allows the user to "Read Later" - i.e. add the item to their Safari reading list.
It's worth noting that the entirety of the code we're interested in for this article is in the method called when the button is pressed:
- (IBAction)readLaterButtonPressed:(id)sender {
    if([SSReadingList supportsURL:[self.detailItem url]]) {
        SSReadingList *readingList = [SSReadingList defaultReadingList];
        NSError *error;
        [readingList addReadingListItemWithURL:[self.detailItem url]
                                         title:[self.detailItem title]
                                   previewText:[self.detailItem description]
                                         error:&error];
        if(error) {
            NSLog(@"There was a problem adding to a reading list");
        } else {
            NSLog(@"Successfully added to reading list");
        }
    }
}

The point of the app isn't to demonstrate how to build an RSS parser, and as such the RSS feed is munged into a JSON feed by Yahoo! Pipes.

Conclusion
A pretty short article today, revealing one of the lesser noticed features of iOS7. It isn't groundbreaking, but if your app has content which might be suitable for adding to the Safari reading list then it's definitely worth the 10 minutes it takes to add the functionality.

Day 9: Device Identification


Introduction
Today's post will be quite brief, but is an important one for any developers who have been using the device unique ID to track their users. There are many reasons that you might want to use the device ID; however, it can also be deemed a privacy concern - allowing tracking of users without their permission.
The device UDID was deprecated in iOS5, and has been removed in iOS7. iOS6 introduced alternatives, which are now the only approaches which are supported:

Vendor Identification
The closest replacement for uniqueIdentifier is another method on UIDevice - identifierForVendor, which returns an NSUUID. This is shared between all apps from the same vendor on the same device. Different vendors on the same device will return different identifierForVendor values, as will the same vendor across different devices.
This value provides pretty much the same functionality from the point of view of the app developer, but without the privacy concerns for the user.
It is worth noting that if a user uninstalls all apps from a specified vendor then the vendor ID will be destroyed. When they install another app from that vendor a new vendor ID will be generated.
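Reading the vendor identifier is a one-liner - a minimal usage sketch:

// identifierForVendor is a property on UIDevice, returning an NSUUID
NSUUID *vendorID = [[UIDevice currentDevice] identifierForVendor];
NSLog(@"Vendor ID: %@", [vendorID UUIDString]);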

Advertising Identification
If you need a unique ID for the purposes of implementing in-app advertising (irrespective of whether it is iAd or not) then an alternative approach is required. The AdSupport module includes a class called ASIdentifierManager which has an advertisingIdentifier method. This returns an NSUUID which may be used for the purposes of tracking advertising. There is also a method, advertisingTrackingEnabled, which returns a BOOL specifying whether or not a user has allowed advertising tracking. If the return value is NO then there is a short list of things that the app is allowed to use the ID for - none of which involves tracking users.
The advertising ID is unique across an entire device - so that if tracking is enabled ads can be tailored to the specific user. More often than not an app developer won't have to interact with this class, but will instead drop in an ad-serving framework which will use the ASIdentifierManager class behind the scenes.
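Should you need to interact with it directly, usage looks like this - a short sketch:

#import <AdSupport/AdSupport.h>

ASIdentifierManager *manager = [ASIdentifierManager sharedManager];
if (manager.advertisingTrackingEnabled) {
    // Only use the identifier for tracking when the user allows it
    NSLog(@"Advertising ID: %@", [manager.advertisingIdentifier UUIDString]);
}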

Network Identification
When uniqueIdentifier was deprecated, using the device's MAC address became popular. A MAC address is a unique identifier allocated to every piece of networking equipment in the world - from WiFi adaptors to datacenter switches. It's possible to query an iOS device for its MAC address, which will be both unique and persistent - so ideal for tracking. However, with iOS7, Apple have made it impossible to obtain the MAC address programmatically on an iOS device - in fact a constant will be returned: 02:00:00:00:00:00. This closes the loophole and will drive developers to the Apple-preferred device identification approaches.

Who Am I?

Conclusion
Apple are stamping out the alternatives to device identification, so now's the time to adopt their chosen approach. This offers greater privacy for the end user, so it's a good thing to do.
The sample project attached to this post (WhoAmI) gives a brief demo of the different approaches we've outlined here.

Day 10: Custom UIViewController Transitions


A much requested feature has been the ability to customize the animations which appear as a user transitions between different view controllers, both for UINavigationController stacks and modal presentation. iOS7 introduces this functionality - both for automatic transitions, and interactive transitions (where the transitions are controlled interactively by the user). In today's post we'll take a look at how to get an automatic transition working - by implementing a fade transition for pushes and pops on a navigation controller.

Navigation Controller Delegate


The world of custom transitions is full of protocols - however, for the example were going to create here
we only need to look at a few. The additional protocols are required for interactive transitions, and modal
presentation.
In order to determine what transition should be used when pushing or popping a view controller, a
UINavigationController has a delegate. This delegate must adopt the UINavigationControllerDelegate
protocol, which has 4 new methods for transitioning. The method were interested in for our custom transition
is:
- (id<UIViewControllerAnimatedTransitioning>)navigationController:
                                 animationControllerForOperation:
                                              fromViewController:
                                                toViewController:

This method will get called every time the navigation controller is transitioning between view controllers (whether through code or through a segue in a storyboard). We get told the view controller we're transitioning from and to, so at this point we can make a decision about what kind of transition we need to return.
We create a class which will act as the nav controller delegate:
@interface SCNavControllerDelegate : NSObject <UINavigationControllerDelegate>

@end

Which has a simple implementation:

@implementation SCNavControllerDelegate
- (id<UIViewControllerAnimatedTransitioning>)
              navigationController:(UINavigationController *)navigationController
   animationControllerForOperation:(UINavigationControllerOperation)operation
                fromViewController:(UIViewController *)fromVC
                  toViewController:(UIViewController *)toVC
{
    return [SCFadeTransition new];
}
@end

We want all of our transitions to be the same (whether forward or backward) and therefore we can just return an SCFadeTransition object for every transition. We'll look at what this object is and does in the next section.
Setting this delegate is simple - and the same as we see all over iOS:
- (id)initWithCoder:(NSCoder *)aDecoder
{
    self = [super initWithCoder:aDecoder];
    if(self) {
        _navDelegate = [SCNavControllerDelegate new];
        self.delegate = _navDelegate;
    }
    return self;
}

where _navDelegate is an ivar of type id<UINavigationControllerDelegate>.
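For reference, the ivar declaration might look like this (a sketch - SCNavController is our assumed name for the UINavigationController subclass):

@implementation SCNavController {
    // Keep a strong reference here: the navigation controller's
    // delegate property doesn't retain its delegate
    id<UINavigationControllerDelegate> _navDelegate;
}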

Creating a custom transition


We saw that the delegate needs to return some kind of transition object. More specifically it has to return an
object which conforms to the UIViewControllerAnimatedTransitioning protocol. This protocol has just 3
methods on it, 2 of which are required:
transitionDuration: (required). This should return the duration of the animation. This is used by the
OS to synchronize other events - e.g. animating the nav bar on the nav controller.
animateTransition: (required). This method is where you will actually implement the animation to
transition between the view controllers. Were provided with an object which gives us access to the
different components were going to need.
animationEnded:. This gets called once the transition is complete to allow you to do any tidying up
that might be required.
We only need to implement the 2 required methods to get our fade transition working. Create an object which
adopts this protocol:

@interface SCFadeTransition : NSObject <UIViewControllerAnimatedTransitioning>

@end

The implementation of the transitionDuration: method is really simple:


- (NSTimeInterval)transitionDuration:
                    (id<UIViewControllerContextTransitioning>)transitionContext
{
    return 2.0;
}

When the animateTransition: method is called, we get provided with an object which conforms to the
UIViewControllerContextTransitioning protocol, which gives us access to all the bits and pieces we need
to complete the animation. The first method we'll use is viewControllerForKey:, which allows us to get hold
of the two view controllers involved in the transition:
// Get the two view controllers
UIViewController *fromVC = [transitionContext
          viewControllerForKey:UITransitionContextFromViewControllerKey];
UIViewController *toVC   = [transitionContext
          viewControllerForKey:UITransitionContextToViewControllerKey];

The context also provides us with a UIView in which to perform the animations, and this is accessible through
the containerView method:
// Get the container view - where the animation has to happen
UIView *containerView = [transitionContext containerView];

We need to make sure that the views associated with each of the view controllers are subviews of the container
view. It's likely that the view we're transitioning from is already a subview, but we ensure it anyway:
// Add the two VC views to the container
[containerView addSubview:fromVC.view];
[containerView addSubview:toVC.view];

We don't want to see the view we're transitioning to yet, so we should set its alpha to 0:

toVC.view.alpha = 0.0;

Now we're in a position to perform the animation. Since we're doing a simple fade between the two view
controllers, we can use a UIView animation block:

[UIView animateWithDuration:[self transitionDuration:transitionContext]
                      delay:0
                    options:0
                 animations:^{
                     toVC.view.alpha = 1.f;
                 }
                 completion:^(BOOL finished) {
                     // Let's get rid of the old VC view
                     [fromVC.view removeFromSuperview];
                     // And then we need to tell the context that we're done
                     [transitionContext completeTransition:YES];
                 }];

Points to note:

- We set the duration to be the same as the transitionDuration: method we've implemented.
- The view associated with the 'from' view controller needs to be removed from the view hierarchy once the
  transition has completed.
- The completeTransition: method on the transition context needs to be called once we've finished the
  animation, so that the OS knows that we're done.

Summary
With that we're done! It's actually quite simple once you get your head around the protocols. The only thing
we had to do to any of our existing view controller code was to set the delegate on the navigation controller.
The rest of the work was implemented with classes which supply a transition object and then perform the
animation itself.
As ever, the code is available on GitHub. Happy transitioning!

Day 11: UIView Key-Frame Animations


Introduction
UIView has had animation methods since iOS2, adding the favored block-based API in iOS4. These methods
are wrappers for the underlying CoreAnimation layers, upon which UIView instances are rendered.

The animation methods in UIView have allowed animation of animatable properties (such as transform,
backgroundColor, frame, center etc.) by setting an end-state, a duration, and other options such as the
animation curve. However, setting intermediate states in the animation, so-called key-frames, has not been
possible. In such cases it was necessary to drop down to CoreAnimation itself and create a CAKeyframeAnimation.
This changes in iOS7 - with the addition of 2 methods to UIView, key-frame animations are now supported without
dropping down to CoreAnimation.
To show how to use UIView key-frame animations we're going to create a couple of demos. The
first is an animation which changes the background color of a view through the colors of the rainbow, and
the second demonstrates a full 360-degree rotation of a view, specifying the rotation direction.

Rainbow Changer
UIView key-frame animations require the use of 2 methods, the first of which is similar to the other
block-based animation methods: animateKeyframesWithDuration:delay:options:animations:completion:. This
takes floats for duration and delay, a bit-mask for options, and blocks for animation and completion - all pretty
standard in the world of UIView animations. The difference comes in the method we call inside the animation
block: addKeyframeWithRelativeStartTime:relativeDuration:animations:. This method is used to add
the fixed points within the animation sequence.
The best way to understand this is with a demonstration. We are going to create an animation which animates
the background color of a UIView through the colors of the rainbow (before we start a flamewar about what
the colors of the rainbow are, I've made an arbitrary choice, which happens to be correct). We'll trigger this
animation on a bar button press, so we add a bar button in the storyboard and wire it up to the following
method:
- (IBAction)handleRainbow:(id)sender {
    [self enableToolbarItems:NO];

    void (^animationBlock)() = ^{
        // Animations here
    };

    [UIView animateKeyframesWithDuration:4.0
                                   delay:0.0
                                 options:UIViewAnimationOptionCurveLinear |
                                         UIViewKeyframeAnimationOptionCalculationModeLinear
                              animations:animationBlock
                              completion:^(BOOL finished) {
                                  [self enableToolbarItems:YES];
                              }];
}

This calls the animateKeyframesWithDuration:delay:options:animations:completion: method, providing
the animationBlock previously defined as a local variable. When we start the animation we disable the toolbar
buttons, and then re-enable them when the animation is complete, using the following utility method:
- (void)enableToolbarItems:(BOOL)enabled
{
    for (UIBarButtonItem *item in self.toolbar.items) {
        item.enabled = enabled;
    }
}

We'll take a look at some of the options available when performing key-frame animations later - right now
let's fill out that animation block:
void (^animationBlock)() = ^{
    NSArray *rainbowColors = @[[UIColor orangeColor],
                               [UIColor yellowColor],
                               [UIColor greenColor],
                               [UIColor blueColor],
                               [UIColor purpleColor],
                               [UIColor redColor]];

    NSUInteger colorCount = [rainbowColors count];

    for(NSUInteger i=0; i<colorCount; i++) {
        [UIView addKeyframeWithRelativeStartTime:i/(CGFloat)colorCount
                                relativeDuration:1/(CGFloat)colorCount
                                      animations:^{
                                          self.rainbowSwatch.backgroundColor =
                                                                rainbowColors[i];
                                      }];
    }
};

We start by creating an array of the colors we want to animate through, before looping through each of them.
For each color we call the method to add a key-frame to the animation:

[UIView addKeyframeWithRelativeStartTime:i/(CGFloat)colorCount
                        relativeDuration:1/(CGFloat)colorCount
                              animations:^{
                                  self.rainbowSwatch.backgroundColor = rainbowColors[i];
                              }];

For each key-frame we specify a start time, a duration and an animation block. The times are relative - i.e.
we specify them as floats in the range (0,1), and they get scaled appropriately to match the animation
duration. Here we want the color changes to be evenly spaced throughout the animation, so we set the relative
start time of each key-frame to be the index of the current color over the total number of colors, and the
relative duration to be 1 over the total number of colors. The animation block specifies the end state of
the animation, in the same manner it does for all UIView block-based animations, so here we just need to set
the background color.
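To make the relative timings concrete: with the six colors above and the 4-second duration we chose, the
key-frame for i = 2 has a relative start time of 2/6 ≈ 0.33 and a relative duration of 1/6 ≈ 0.17, so it begins
about 1.3 seconds into the animation and lasts about 0.67 seconds.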
If you run the app up now and press the Rainbow button then you'll see your first UIView key-frame
animation in action.

Keyframe animation options


The options parameter of the animateKeyframesWithDuration:delay:options:animations:completion: method
accepts UIViewAnimationOptions arguments, along with some new values specified in
UIViewKeyframeAnimationOptions - notably the way in which the animation phases are fitted to the animation
curve. The following are the options which govern this behavior:


UIViewKeyframeAnimationOptionCalculationModeLinear
UIViewKeyframeAnimationOptionCalculationModeDiscrete
UIViewKeyframeAnimationOptionCalculationModePaced
UIViewKeyframeAnimationOptionCalculationModeCubic
UIViewKeyframeAnimationOptionCalculationModeCubicPaced

The graph referenced below shows how the different options control the animation. The horizontal axis represents
the time of the animation, whereas the vertical axis represents a one-dimensional parameter we are animating
(this could be, for example, the alpha of a view, or the width of a frame). We have specified 3 key-frames in
this example, each with different durations and end values.

[Image: graph comparing the keyframe calculation modes]

Let's look at each of the options in more detail:

- Linear: The transitions between keyframes are linearly interpolated, as shown in red on the graph.
  This means that an animation can appear to speed up and slow down, as the animation deltas vary.
- Discrete: The transitions are instantaneous at the end of each keyframe duration, as shown in blue on
  the graph. In this case there is effectively no animation - just jumps between the keyframe values.
- Paced: A simple algorithm which attempts to maintain a constant velocity between keyframe animation
  points.
- Cubic: A cubic spline is drawn between the keyframe points, and then the animation occurs along this
  line - as demonstrated in green. This can result in animations initially going in the opposite direction
  to the one you expect.
- CubicPaced: This ignores the timings specified in the keyframe animations and instead forces a constant
  velocity between the different keyframe locations. A similar concept is demonstrated in pink on the
  graph. This will result in a smooth-looking animation with constant velocity, but it will ignore the
  timings you initially requested.
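Trying one of these is just a matter of swapping the relevant mode into the options bit-mask. A minimal
sketch, reusing the animationBlock we defined earlier - here a cubic variation on the rainbow animation:

[UIView animateKeyframesWithDuration:4.0
                               delay:0.0
                             options:UIViewAnimationOptionCurveLinear |
                                     UIViewKeyframeAnimationOptionCalculationModeCubic
                          animations:animationBlock
                          completion:NULL];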


I would suggest that, other than discrete, it's worth playing around with the different options in your specific
example. Since the algorithms are complete black-boxes, and you have no control over their parameters, trying
to fully understand their operation is somewhat futile. An empirical approach to option selection will be more
fruitful in this case (this isn't usually true - it's generally good to understand what the different options
actually mean rather than guessing).

Rotation Directions
As a bonus example, we're also going to take a look at how to perform full rotations of views, specifying
the direction. When you specify an animation, CoreAnimation will animate the shortest route from the start
state to the end state. Therefore with rotation transforms we can only specify the start angle and the end
angle, but not the direction in which the view will rotate. With key-frame animations we can overcome this by
specifying some intermediate states.
Therefore, for a full clockwise rotation we can write the following method:
- (IBAction)handleRotateCW:(id)sender {
    [self enableToolbarItems:NO];
    [UIView animateKeyframesWithDuration:2.0
                                   delay:0.0
                                 options:UIViewKeyframeAnimationOptionCalculationModeLinear
                              animations:^{
                                  [UIView addKeyframeWithRelativeStartTime:0.0
                                                          relativeDuration:1/3.0
                                                                animations:^{
                                      self.rotatingHead.transform =
                                          CGAffineTransformMakeRotation(2.0 * M_PI / 3.0);
                                  }];
                                  [UIView addKeyframeWithRelativeStartTime:1/3.0
                                                          relativeDuration:1/3.0
                                                                animations:^{
                                      self.rotatingHead.transform =
                                          CGAffineTransformMakeRotation(4.0 * M_PI / 3.0);
                                  }];
                                  [UIView addKeyframeWithRelativeStartTime:2/3.0
                                                          relativeDuration:1/3.0
                                                                animations:^{
                                      self.rotatingHead.transform =
                                          CGAffineTransformMakeRotation(0);
                                  }];
                              }
                              completion:^(BOOL finished) {
                                  [self enableToolbarItems:YES];
                              }];
}

We perform a key-frame animation with 3 states, equally spaced throughout the animation duration. We start
with a rotation angle of 0, so next we move to 2π/3, then 4π/3, before finishing back at 0. In order to completely
specify a rotation of 2π we need exactly 2 intermediate fixed points, since as soon as there is an
angle difference of greater than π between key-frames the view will rotate in the opposite direction to the one
you'd like. At an angle difference of exactly π, the behavior is undefined.

In order to change the direction of rotation we can just reverse the key-frames, i.e. starting at an angle of 0 we
then move to 4π/3, followed by 2π/3, before finishing back at 0:
- (IBAction)handleRotateCCW:(id)sender {
    [self enableToolbarItems:NO];
    [UIView animateKeyframesWithDuration:2.0
                                   delay:0.0
                                 options:UIViewKeyframeAnimationOptionCalculationModeLinear
                              animations:^{
                                  [UIView addKeyframeWithRelativeStartTime:0.0
                                                          relativeDuration:1/3.0
                                                                animations:^{
                                      self.rotatingHead.transform =
                                          CGAffineTransformMakeRotation(4.0 * M_PI / 3.0);
                                  }];
                                  [UIView addKeyframeWithRelativeStartTime:1/3.0
                                                          relativeDuration:1/3.0
                                                                animations:^{
                                      self.rotatingHead.transform =
                                          CGAffineTransformMakeRotation(2.0 * M_PI / 3.0);
                                  }];
                                  [UIView addKeyframeWithRelativeStartTime:2/3.0
                                                          relativeDuration:1/3.0
                                                                animations:^{
                                      self.rotatingHead.transform =
                                          CGAffineTransformMakeRotation(0);
                                  }];
                              }
                              completion:^(BOOL finished) {
                                  [self enableToolbarItems:YES];
                              }];
}

The Shinobi ninja head can now rotate in either a clockwise or a counter-clockwise direction - without having
to drop down to CoreAnimation layers.

Conclusion
UIView animations have always been a high-level way to perform simple animations on views, and have
benefited from being exceptionally simple to understand and build. With the addition of key-frame
animations, more complex animations can now benefit from the same simple API. This post has demonstrated
how powerful they can be - with some trivial examples (although choosing a direction for rotation is a common
request).

Day 12: Dynamic Type


Introduction
iOS7 introduced a new high-level text-rendering framework called TextKit. TextKit is based on the extremely
powerful CoreText rendering engine, and all the Apple-provided text-based controls have been updated to
use the TextKit engine. TextKit is a significant addition to iOS, and among the things it adds are the concept of
Dynamic Type, and font descriptors. We'll look at these features of TextKit in today's post.

Dynamic Type
Dynamic type allows users to specify how large the typeface is in the apps on their device.
This isn't simply the ability to alter the font size; it also alters other properties of the type, such as the kerning
and the line-spacing, ensuring that the text is as readable as it can be at the different type sizes. In
order to do this you no longer specify particular fonts for your different text elements, but instead set what
they semantically represent, i.e. rather than specifying Helvetica 11pt, you set the type to be body text.
This is in line with the way in which something like HTML works - semantic markup of your text, allowing
the user to control the appearance. As such, rather than specifying fonts per se, there is a new class method
on UIFont which will pull out the correct font:
self.subHeadingLabel.font = [UIFont preferredFontForTextStyle:UIFontTextStyleSubheadline];

There are 6 different text styles available in iOS7:

UIFontTextStyleHeadline
UIFontTextStyleBody
UIFontTextStyleSubheadline
UIFontTextStyleFootnote
UIFontTextStyleCaption1
UIFontTextStyleCaption2

As well as specifying the font via code, you can set it using Interface Builder:

[Image: setting Dynamic Type text styles in Interface Builder]

When combined with autolayout, using dynamic type means that a user can control the appearance of the
text inside your app. There is a Text Size screen within the Settings app which allows the type size to be
changed:

[Image: the Text Size settings screen]

There are a total of 7 different font sizes - the following shots demonstrate some of them:

[Image: sample text rendered at different Dynamic Type sizes]

In future OS updates the specific font might change as the appearance of the operating system develops, but
by adopting dynamic type you can be assured that your app will both be accessible and match the OS style
with no further work down the line.

Font Descriptors
Another addition which TextKit brings is the concept of font descriptors. These are much more in line with
the way we're used to thinking about fonts - we can modify a font, as opposed to having to completely
specify a new one. For example, say we have some text we'd like to render in the same font as our body text,
but bold. Previously in iOS we would have had to know the font being used for the body text, find its bold
equivalent, and then construct a new font object using fontWithName:size: with the string name of that bold
equivalent.
This isn't very intuitive, and with the introduction of dynamic type it's not always possible to know exactly
which font you're using. Font descriptors make this a lot easier - since a descriptor is a collection of attributes
about a font, it's possible to change attributes and hence change the font. For example, if we would like to get
a bold version of the body text font:
UIFontDescriptor *bodyFontDescriptor = [UIFontDescriptor
        preferredFontDescriptorWithTextStyle:UIFontTextStyleBody];
UIFontDescriptor *boldBodyFontDescriptor = [bodyFontDescriptor
        fontDescriptorWithSymbolicTraits:UIFontDescriptorTraitBold];
self.boldBodyTextLabel.font = [UIFont fontWithDescriptor:boldBodyFontDescriptor size:0.0];

First we get the descriptor for the body text style, and then using the fontDescriptorWithSymbolicTraits:
method we override a so-called font trait. The UIFont method fontWithDescriptor:size: can then be
used to actually get the required font - noting that setting the size: parameter to 0.0 will return
the font sized as determined by the font descriptor.


This is an example of modifying a UIFontDescriptor using a font trait, other examples of which are as
follows:

UIFontDescriptorTraitItalic
UIFontDescriptorTraitExpanded
UIFontDescriptorTraitCondensed

It's also possible to specify other features of the font appearance (such as the type of serifs) using attributes.
Have a read of the documentation for UIFontDescriptorSymbolicTraits for more information.
As well as modifying an existing font descriptor, you can create a dictionary of attributes and then find a font
descriptor which matches your request. For example:
UIFontDescriptor *scriptFontDescriptor =
    [UIFontDescriptor fontDescriptorWithFontAttributes:
        @{UIFontDescriptorFamilyAttribute: @"Zapfino",
          UIFontDescriptorSizeAttribute: @15.0}
    ];
self.scriptTextLabel.font = [UIFont fontWithDescriptor:scriptFontDescriptor size:0.0];

Here we're specifying a font with a given family and size in a dictionary of attributes. Other attributes which
can be used include:

UIFontDescriptorNameAttribute
UIFontDescriptorTextStyleAttribute
UIFontDescriptorVisibleNameAttribute
UIFontDescriptorMatrixAttribute

This list is not exhaustive - UIFontDescriptor is incredibly powerful, and brings iOS in line with many other
text rendering engines used elsewhere.

[Image: the font descriptor samples in action]

Conclusion
Dynamic type is an incredibly useful tool for improving both the appearance and accessibility of your app. When
combined with autolayout it allows user content to be beautiful and easily readable. Font descriptors offer a
much easier way to work with fonts - much closer to the concept we hold in our heads from years of using
word-processing software - and should make working with fonts a lot less painful. We've only seen the tip of
the iceberg here today - type rendering is a complex topic, and with these new concepts iOS is providing
much easier access to the underlying engine.

Day 13: Route Directions with MapKit


Introduction
iOS7 saw a few changes and additions to MapKit - the mapping framework in iOS. One of the key examples is
the addition of an API which can provide routing directions between two points. In today's post we're going
to take a look at how to use this API by building a simple routing application. This will also involve
taking a brief look at the overlay rendering API.

Requesting Directions
There are quite a lot of different classes we need from MapKit, but it's pretty simple to work through
them in turn. In order to query Apple's servers for a set of directions, we need to encapsulate the details
in an MKDirectionsRequest object. This class has existed since iOS6 for use by apps which were capable of
generating their own turn-by-turn directions, but it has been expanded in iOS7 to allow developers to request
directions from Apple themselves.
MKDirectionsRequest *directionsRequest = [MKDirectionsRequest new];

In order to make a request we need to set the source and the destination, both of which are MKMapItem objects.
These represent a location on a map, including its position and other metadata such as
name, phone number and URL. There are a couple of options for creating them - one of which is to use the
user's current location:

MKMapItem *source = [MKMapItem mapItemForCurrentLocation];

When the user fires up the app for the first time they will be asked for permission to use their current
location:

[Image: the location permission alert]

You can also create a map item for a specific location using the initWithPlacemark: method, which brings
us on to another MapKit class. MKPlacemark represents the actual location on a map - i.e. its latitude and
longitude. We could use a reverse geocoder from CoreLocation to generate a placemark, but since that's not
the point of this post, we're going to create a placemark for some fixed coordinates. Putting all this together
we can complete setting up our MKDirectionsRequest object:
// Make the destination
CLLocationCoordinate2D destinationCoords = CLLocationCoordinate2DMake(38.8977, -77.0365);
MKPlacemark *destinationPlacemark = [[MKPlacemark alloc]
                                        initWithCoordinate:destinationCoords
                                         addressDictionary:nil];
MKMapItem *destination = [[MKMapItem alloc] initWithPlacemark:destinationPlacemark];

// Set the source and destination on the request
[directionsRequest setSource:source];
[directionsRequest setDestination:destination];

There are some other optional properties on an MKDirectionsRequest which can be used to control the routes
we're sent back:

- departureDate and arrivalDate. Setting these values will enable the returned routes to be optimized
  for the time of day of travel - e.g. allowing for standard traffic conditions.
- transportType. Currently Apple can provide either walking or driving directions, using the enum values
  MKDirectionsTransportTypeAutomobile or MKDirectionsTransportTypeWalking. The default value is
  MKDirectionsTransportTypeAny.
- requestsAlternateRoutes. If the routing server can find more than one reasonable route, setting
  this property to YES will enable them all to be returned. Otherwise it will return just one route.
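For instance, a request for walking directions with alternatives enabled might be configured like this - a
small sketch using the directionsRequest we created above:

// Ask for walking directions, and any reasonable alternative routes
directionsRequest.transportType = MKDirectionsTransportTypeWalking;
directionsRequest.requestsAlternateRoutes = YES;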
Now that we've got a valid directions request we can send it off to get a route. This is done using the
MKDirections class, which has a constructor which takes an MKDirectionsRequest object:

MKDirections *directions = [[MKDirections alloc] initWithRequest:directionsRequest];

There are 2 methods which can be used: calculateETAWithCompletionHandler: estimates the time a route
will take, whereas calculateDirectionsWithCompletionHandler: calculates the actual route. Both of these
methods are asynchronous, and take completion handler blocks. MKDirections objects also have a cancel
method, which does as its name suggests for any currently running requests, and a calculating property which
is true when a request is currently in progress. A single MKDirections object can only run a single request
at once - additional requests will fail. If you want to run multiple simultaneous requests then you can have
more than one MKDirections object, but be aware that asking for too many might well result in receiving
throttling errors from Apple's servers.
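As an aside, the ETA variant follows the same pattern - a hedged sketch (not part of the sample app), noting
that the MKETAResponse exposes its estimate as an NSTimeInterval:

[directions calculateETAWithCompletionHandler:^(MKETAResponse *response,
                                                NSError *error) {
    if (!error) {
        // expectedTravelTime is measured in seconds
        NSLog(@"Expected travel time: %.0fs", response.expectedTravelTime);
    }
}];

The directions calculation itself is kicked off like this: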
[directions calculateDirectionsWithCompletionHandler:^(MKDirectionsResponse *response,
                                                       NSError *error) {
    // Handle the response here
}];

Directions Response
The response from Apple's server is returned to us as an MKDirectionsResponse object which, as well as the
source and destination, contains an array of MKRoute objects. Note that this array will contain just one object
unless we set requestsAlternateRoutes to YES on the request.
MKRoute objects, as their name suggests, represent a route between two points which a user can follow. Each
contains a set of properties with information about the route:


- name: Generated by the route-finding servers, this will be based on the route's significant features.
- advisoryNotices: An array of strings which contain details of any warnings or the suchlike which are
  appropriate to the generated route.
- distance: Along the route itself - not direct. Measured in metres.
- expectedTravelTime: An NSTimeInterval - i.e. measured in seconds.
- transportType: The transport type used to generate the route.
- polyline: An MKPolyline object which represents the path of the route as a line on the map. This can be
  drawn on an MKMapView, and we'll look at doing this in the next section.
- steps: An array of MKRouteStep objects which make up the route.


The other argument provided to our handler block is an NSError object, so we can use the following block to
handle the directions response:
[directions calculateDirectionsWithCompletionHandler:^(MKDirectionsResponse *response,
                                                       NSError *error) {
    // Now handle the result
    if (error) {
        NSLog(@"There was an error getting your directions");
        return;
    }

    // So there wasn't an error - let's plot those routes
    _currentRoute = [response.routes firstObject];
    [self plotRouteOnMap:_currentRoute];
}];

We have created a utility method to plot a route on the map, which we'll take a look at in the next section.

Rendering a Polyline
We've been sent a polyline for the route, which we want to plot on the map. iOS7 changes the way in which
we plot overlays on the map, with the introduction of the MKOverlayRenderer class. If we want to draw custom
shapes, or use a non-standard rendering technique, then we can subclass it to create our own renderer; however,
there is a set of overlay renderers for the standard use cases. We want to render a polyline, so we can use
MKPolylineRenderer. We'll look in a second at when and where to create our renderer, but first let's take a look
at the plotRouteOnMap: method we referred to in the previous section.
An MKPolyline is an object which represents a line made from multiple segments, and adopts the MKOverlay
protocol. This means that we can add it as an overlay to an MKMapView object, using the addOverlay: method:
- (void)plotRouteOnMap:(MKRoute *)route
{
    if(_routeOverlay) {
        [self.mapView removeOverlay:_routeOverlay];
    }

    // Update the ivar
    _routeOverlay = route.polyline;

    // Add it to the map
    [self.mapView addOverlay:_routeOverlay];
}


This method takes an MKRoute object and adds the polyline of the route as an overlay to the MKMapView
referenced by the mapView property. We have an ivar, _routeOverlay, which we use to keep a reference to
the polyline. This means that when the method is called we can remove any existing route and replace it with
the new one.
Although we've now added the overlay to the map view, it won't yet be drawn. This is because the map
doesn't know how to draw this overlay object - and this is where the new MKOverlayRenderer class comes in.
When an overlay is present on a map view, the map view will ask its delegate for a renderer to draw it. Then,
as the user zooms and pans around the map, the renderer will be asked to draw the overlay in the different
map states.
We need to adopt the MKMapViewDelegate protocol, and implement the following method to provide the map
view with a renderer for our polyline:
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView
            rendererForOverlay:(id<MKOverlay>)overlay
{
    MKPolylineRenderer *renderer = [[MKPolylineRenderer alloc] initWithPolyline:overlay];
    renderer.strokeColor = [UIColor redColor];
    renderer.lineWidth = 4.0;
    return renderer;
}

We've got a somewhat simplified situation here: we know that there will only ever be one overlay, and that it
will be of type MKPolyline, so we don't need any code to decide what kind of renderer to return. We create an
MKPolylineRenderer - a subclass of MKOverlayRenderer whose purpose is to draw polyline overlays - set some
simple properties (strokeColor and lineWidth) so that we can see the overlay, and then return the new object.
All that remains is setting the delegate property on the map view so that this delegate method is called when
the overlay is added to the map:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    self.mapView.delegate = self;
}

[Image: the route polyline rendered on the map]

Route steps
As well as the polyline representing the route, we're also provided with an array of MKRouteStep objects,
which form the turn-by-turn directions a user should follow to travel along the route. MKRouteStep objects
have the following properties:

- polyline: Much as the route has a polyline, each step has a line which can be used to show
  this section of the route on a map.
- instructions: A string which gives the details of what the user should do to follow this section of the
  route.
- notice: Any useful information regarding this section of the route.
- distance: Measured in metres.
- transportType: It's not unreasonable for routes to comprise multiple modes of transport, so each step
  has its own transport type.
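One straightforward use of the steps is simply to log the turn-by-turn instructions - a hypothetical sketch
(not part of RouteMaster), reusing the _currentRoute ivar from earlier:

for (MKRouteStep *step in _currentRoute.steps) {
    // Each step carries its own instructions and distance (in metres)
    NSLog(@"%@ (%.0fm)", step.instructions, step.distance);
}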
In the RouteMaster app accompanying today's post we populate a table view with the list of steps, and then
show a new map view for the relevant section when requested.


Building RouteMaster
We've now discussed the process used to request directions and the response we get back, but not given many
details about the app which accompanies today's post. Even though it doesn't really demonstrate any further
details of MapKit, it's worth having a quick look at how the app is constructed.
This app isn't especially useful, since it only determines the route from your current location to the White
House in Washington DC. The app is built using a storyboard, and is based around a navigation controller.
The following are the view controllers which make up the app:

- SCViewController. The main screen. Allows the user to kick off the routing request, and when a response is
  received plots the entire route on the embedded map view. It contains a button (which appears when a
  route has been received) to view the route details. This pushes the next view controller onto the stack.
- SCStepsViewController. This is a UITableViewController which displays a cell for each of the steps
  in the route. Selecting one of these cells pushes the final view controller onto the stack.
- SCIndividualStepViewController. This displays the details of a specific step, including a map, the
  distance, and the instructions provided by the routing server.
Since we're using storyboards, we override the prepareForSegue:sender: method in each of our view
controllers to provide the next view controller with the data it needs to display. For example, we set the
route property (of type MKRoute) of the SCStepsViewController as we segue from the main view
controller:

- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if ([segue.destinationViewController isKindOfClass:[SCStepsViewController class]]) {
        SCStepsViewController *vc =
            (SCStepsViewController *)segue.destinationViewController;
        vc.route = _currentRoute;
    }
}

Similarly, the SCIndividualStepViewController has a routeStep property (of type MKRouteStep), which we
set as we transition from the table of steps:
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if ([[segue destinationViewController]
            isKindOfClass:[SCIndividualStepViewController class]]) {
        SCIndividualStepViewController *vc =
            (SCIndividualStepViewController *)[segue destinationViewController];
        NSIndexPath *selectedRow = [self.tableView indexPathForSelectedRow];

        // If we have a selected row then set the step appropriately
        if(selectedRow) {
            vc.routeStep = self.route.steps[selectedRow.row];
            vc.stepIndex = selectedRow.row;
        }
    }
}

Since the individual step view controller contains an MKMapView, we add the polyline as an overlay in exactly
the same way we did for the main view controller.
The rest of the app is pretty self-explanatory, and if you run it up you should be provided with the best route
from your current location (or its simulated equivalent in the simulator) to the White House. You can change the
simulated location in the Debug menu in Xcode, although it only seems to be possible to get routing results
for a start location within the continental US (which seems reasonable - driving across the Atlantic isn't that
easy).

[Image: simulating a location from Xcode's Debug menu]

Maybe it's not the most useful app, but with a sprinkling of CoreLocation you could make your own directions
app without too much difficulty.

Conclusion
MapKit is starting to mature a little in iOS7 with the addition of some really useful APIs. The directions API
is fairly easy to use, despite the plethora of different classes, and returns results which are really easy to work
with in an app. All we need now is for the constant improvement in Apple's mapping back-end to continue, so
that the results we provide to users are sensible.

Day 14: Interactive View Controller Transitions


Introduction
Back on day 10 of DbD we looked at how to create custom view controller transitions, by creating a fade
transition for a navigation controller. Interactive view controller transitions add another dimension to this,
allowing the transition to be controlled interactively, usually with gestures.
Today's post is going to take a look at how to create an interactive view transition, for a modal view controller,
which will look like a card flip. As the user pans down the view, the card-flip animation will follow the user's
finger.

Flip Transition Animation


Interactive transitions augment custom animations, and therefore we need to start out by creating a custom
animation the same way we did for the fader: i.e. an object which adopts the
UIViewControllerAnimatedTransitioning protocol.
@interface SCFlipAnimation : NSObject <UIViewControllerAnimatedTransitioning>

@property (nonatomic, assign) BOOL dismissal;

@end

We define a dismissal property which is going to determine which direction the card flip goes in.
As before, we need to implement 2 methods:
- (void)animateTransition:(id<UIViewControllerContextTransitioning>)transitionContext
{
    // Get the respective view controllers
    UIViewController *fromVC = [transitionContext
            viewControllerForKey:UITransitionContextFromViewControllerKey];
    UIViewController *toVC = [transitionContext
            viewControllerForKey:UITransitionContextToViewControllerKey];

    // Get the views
    UIView *containerView = [transitionContext containerView];
    UIView *fromView = fromVC.view;
    UIView *toView = toVC.view;

    // Add the toView to the container
    [containerView addSubview:toView];

    // Set the frames
    CGRect initialFrame = [transitionContext initialFrameForViewController:fromVC];
    fromView.frame = initialFrame;
    toView.frame = initialFrame;

    // Start building the transform - 3D so need perspective
    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = -1/CGRectGetHeight(initialFrame);
    containerView.layer.sublayerTransform = transform;

    CGFloat direction = self.dismissal ? -1.0 : 1.0;

    toView.layer.transform = CATransform3DMakeRotation(-direction * M_PI_2, 1, 0, 0);

    [UIView animateKeyframesWithDuration:[self transitionDuration:transitionContext]
                                   delay:0.0
                                 options:0
                              animations:^{
                                  // First half is rotating in
                                  [UIView addKeyframeWithRelativeStartTime:0.0
                                                          relativeDuration:0.5
                                                                animations:^{
                                      fromView.layer.transform =
                                          CATransform3DMakeRotation(direction * M_PI_2, 1, 0, 0);
                                  }];
                                  [UIView addKeyframeWithRelativeStartTime:0.5
                                                          relativeDuration:0.5
                                                                animations:^{
                                      toView.layer.transform =
                                          CATransform3DMakeRotation(0, 1, 0, 0);
                                  }];
                              } completion:^(BOOL finished) {
                                  [transitionContext completeTransition:
                                      ![transitionContext transitionWasCancelled]];
                              }];
}

- (NSTimeInterval)transitionDuration:
        (id<UIViewControllerContextTransitioning>)transitionContext
{
    return 1.0;
}

The animation method looks quite complicated, but in reality it just uses the new UIView keyframe animations
we looked at on day 11. The important part to note is that the dismissal property is used to determine the
direction in which the rotation will be performed. Other than that, the animation is pretty straightforward,
and we won't go into detail here. For more information check out custom view controller transitions on day 10
and UIView key-frame animations on day 11.
Now that we have an animation object, we have to wire it into our view controller transitions. We have created
a storyboard which contains 2 view controllers. The first contains a button which triggers a segue to present
the modal view controller, and the second contains a button which dismisses the modal view controller via
the following method:
- (IBAction)handleDismissPressed:(id)sender {
    [self dismissViewControllerAnimated:YES completion:NULL];
}

If we run up the app now then we can see the standard transition animation to present and dismiss a modal
view controller. There is a standard flip transition which we could use, but we're interested in custom
animations, so let's add our custom transition animation.

In day 10 we implemented the UINavigationControllerDelegate protocol, because we were controlling the
transitions of a navigation controller. Here we're controlling the transitions associated with a modal view
controller, so instead we need to implement the UIViewControllerTransitioningDelegate protocol. This has
similar methods: we need animationControllerForPresentedController:presentingController:sourceController:
and animationControllerForDismissedController:. We implement these methods on the primary view
controller, and return the animation object we created above:

@interface SCViewController () <UIViewControllerTransitioningDelegate> {
    SCFlipAnimation *_flipAnimation;
}
@end

@implementation SCViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    _flipAnimation = [SCFlipAnimation new];
}

- (id<UIViewControllerAnimatedTransitioning>)
        animationControllerForPresentedController:(UIViewController *)presented
                              presentingController:(UIViewController *)presenting
                                  sourceController:(UIViewController *)source
{
    _flipAnimation.dismissal = NO;
    return _flipAnimation;
}

- (id<UIViewControllerAnimatedTransitioning>)
        animationControllerForDismissedController:(UIViewController *)dismissed
{
    _flipAnimation.dismissal = YES;
    return _flipAnimation;
}

It's important to note that the difference between the present and dismiss methods is the setting of the
dismissal property on the animation - which determines the direction of the flip. All that is left
to do is to set the transitioning delegate on the appropriate view controller. Since we're talking about
presenting and dismissing a view controller, these methods both refer to the modal view controller, and so
the delegate must be set on that controller. Since the modal view controller is being created by the storyboard
segue process, we can set this in the prepareForSegue:sender: method:

- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if([segue.destinationViewController isKindOfClass:[SCModalViewController class]]) {
        // Set the delegate
        SCModalViewController *vc =
            (SCModalViewController *)segue.destinationViewController;
        vc.transitioningDelegate = self;
    }
}

If you run the app up now, you should see that we've replaced the original slide animation with our
custom vertical card-flip animation.

Interactive transitioning
There are 2 more methods on the UIViewControllerTransitioningDelegate protocol, both of which return
an object implementing the UIViewControllerInteractiveTransitioning protocol; these are provided
to support interactive transitioning. We could go ahead and create an object which implements this ourselves,
but Apple has provided a concrete class in the form of UIPercentDrivenInteractiveTransition which covers
the majority of use cases.
The concept of an interactor (i.e. an object which conforms to UIViewControllerInteractiveTransitioning)
is that it controls the progress of an animation (which is provided by an object conforming to the
UIViewControllerAnimatedTransitioning protocol). The UIPercentDrivenInteractiveTransition class
provides methods for specifying the current progress of the animation as a percentage, as well as for
cancelling and completing the animation.
This will all become a lot clearer once we see how it fits into our project. We want to create a pan
gesture which, as the user drags vertically, controls the transition of presenting/dismissing the modal
view controller. We'll create a subclass of UIPercentDrivenInteractiveTransition which has the following
properties:

@interface SCFlipAnimationInteractor : UIPercentDrivenInteractiveTransition

@property (nonatomic, strong, readonly) UIPanGestureRecognizer *gestureRecogniser;
@property (nonatomic, assign, readonly) BOOL interactionInProgress;
@property (nonatomic, weak) UIViewController
            <SCInteractiveTransitionViewControllerDelegate> *presentingVC;

@end

The gesture recognizer is as we've already discussed; we also provide a property for determining whether or
not an interaction is in progress, and finally a property which specifies the presenting view controller. We'll
see why we need this later on, but for now note that it adopts the following simple protocol:
@protocol SCInteractiveTransitionViewControllerDelegate <NSObject>

- (void)proceedToNextViewController;

@end

On to the implementation of this subclass:


@interface SCFlipAnimationInteractor ()
@property (nonatomic, strong, readwrite) UIPanGestureRecognizer *gestureRecogniser;
@property (nonatomic, assign, readwrite) BOOL interactionInProgress;
@end

@implementation SCFlipAnimationInteractor

- (instancetype)init
{
    self = [super init];
    if (self) {
        self.gestureRecogniser = [[UIPanGestureRecognizer alloc]
                initWithTarget:self action:@selector(handlePan:)];
    }
    return self;
}

@end

First we redefine 2 of the properties as internally read-write, and at construction time we create the
gesture recognizer, setting its target to an internal method. Notice that we don't attach it to any views at this
stage - we have exposed it as a property so that we can do that externally.
The pan handling method is as follows:

- (void)handlePan:(UIPanGestureRecognizer *)pgr
{
    CGPoint translation = [pgr translationInView:pgr.view];
    CGFloat percentage = fabs(translation.y / CGRectGetHeight(pgr.view.bounds));
    switch (pgr.state) {
        case UIGestureRecognizerStateBegan:
            self.interactionInProgress = YES;
            [self.presentingVC proceedToNextViewController];
            break;

        case UIGestureRecognizerStateChanged: {
            [self updateInteractiveTransition:percentage];
            break;
        }

        case UIGestureRecognizerStateEnded:
            if(percentage < 0.5) {
                [self cancelInteractiveTransition];
            } else {
                [self finishInteractiveTransition];
            }
            self.interactionInProgress = NO;
            break;

        case UIGestureRecognizerStateCancelled:
            [self cancelInteractiveTransition];
            self.interactionInProgress = NO;
            break;

        default:
            break;
    }
}

This is a fairly standard gesture recognizer handling method, with cases for the different recognizer states.
Before we start the switch we calculate the percentage complete - i.e. given how far the gesture has travelled,
how complete we consider the transition to be. Then the switch behaves as follows:
- Began: Here we record that an interaction is now in progress, and use the method we added to our
  presentingVC to begin the transition. This is important - we're using the gesture to begin
  the transition. The interactor isn't currently being used for anything other than handling the gesture, because
  there is no transition occurring. Once we've called this method on the view controller (provided we
  implement it correctly) a transition will begin, and the interactor will begin performing its animation-control
  job.
- Changed: We must now be in the middle of an interactive transition (since we started one when the
  gesture began), and therefore we just call the method provided by our superclass,
  updateInteractiveTransition:, to specify how complete our transition is. This sets the current transition
  appearance to be as if the animation were the specified proportion complete.
- Ended: When the gesture ends we need to decide whether we should finish the transition or cancel it.
  We call the helper methods provided by the superclass to cancel the transition
  (cancelInteractiveTransition) if the percentage is lower than 0.5, and to complete the transition
  (finishInteractiveTransition) otherwise. We also need to update our in-progress property, since the
  transition is finished.
- Cancelled: If cancelled, then we should cancel the transition and update the interactionInProgress
  property.
That completes all the code we need in the interactor - all that remains is to wire it all up.
First, let's add the new methods for interactive transitions to the UIViewControllerTransitioningDelegate,
which is our primary view controller:
- (id<UIViewControllerInteractiveTransitioning>)interactionControllerForPresentation:
        (id<UIViewControllerAnimatedTransitioning>)animator
{
    return _animationInteractor.interactionInProgress ? _animationInteractor : nil;
}

- (id<UIViewControllerInteractiveTransitioning>)interactionControllerForDismissal:
        (id<UIViewControllerAnimatedTransitioning>)animator
{
    return _animationInteractor.interactionInProgress ? _animationInteractor : nil;
}

These are both identical (for presentation and dismissal). We only want to return an interactor if we're
performing an interactive transition - i.e. if the user tapped the button rather than panning, then we
should perform a non-interactive transition. This is the purpose of the interactionInProgress property on
our interactor. We're returning an ivar, _animationInteractor, here, which we set up in viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    _animationInteractor = [SCFlipAnimationInteractor new];
    _flipAnimation = [SCFlipAnimation new];
}

When we created the gesture recognizer in the interactor we didn't actually add it to a view, so we do
that now, in our view controller's viewDidAppear::

- (void)viewDidAppear:(BOOL)animated
{
    // Add the gesture recogniser to the window the first time we're rendered
    if (![self.view.window.gestureRecognizers
            containsObject:_animationInteractor.gestureRecogniser]) {
        [self.view.window addGestureRecognizer:_animationInteractor.gestureRecogniser];
    }
}

We normally add gesture recognizers to views, but here we're adding it to the window object instead. This
is because as the animation occurs the view controller's view will move, and hence the gesture recognizer
won't behave as expected. Adding it to the window instead ensures the behavior we expect. If we were
performing a navigation controller transition, we could instead add the gesture to the navigation controller's
view. The gesture recognizer is added in viewDidAppear: since at this point the window property is set correctly.
The final piece of the puzzle is to set the presentingVC property on the interactor. In order to do this we need
to make our view controllers implement the SCInteractiveTransitionViewControllerDelegate protocol.
On our main view controller this is pretty simple:
@interface SCViewController () <SCInteractiveTransitionViewControllerDelegate,
                                UIViewControllerTransitioningDelegate> {
    SCFlipAnimationInteractor *_animationInteractor;
    SCFlipAnimation *_flipAnimation;
}
@end

#pragma mark - SCInteractiveTransitionViewControllerDelegate methods

- (void)proceedToNextViewController
{
    [self performSegueWithIdentifier:@"displayModal" sender:self];
}

Now that we have implemented the required method, we can set the correct property on the interactor in
viewDidAppear:. This ensures that it is set correctly every time the primary view controller is displayed,
whether on first display or when the modal view controller is dismissed:
- (void)viewDidAppear:(BOOL)animated
{
    ...
    // Set the recipient of the interactor
    _animationInteractor.presentingVC = self;
}

So, when the user starts the pan gesture, the interactor will call proceedToNextViewController on the primary
view controller, which will kick off the segue to present the modal view controller - exactly what we
want!


To perform the same operation on the modal view controller, it must have a reference to the interactor as well
(so that it can update the presentingVC property):
@interface SCModalViewController : UIViewController
        <SCInteractiveTransitionViewControllerDelegate>

...

@property (nonatomic, weak) SCFlipAnimationInteractor *interactor;

@end

We set this property in the prepareForSegue: method on the main view controller:
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if([segue.destinationViewController isKindOfClass:[SCModalViewController class]]) {
        // Set the delegate
        ...
        vc.interactor = _animationInteractor;
    }
}

The SCInteractiveTransitionViewControllerDelegate protocol is adopted by implementing the
proceedToNextViewController method:
- (void)proceedToNextViewController
{
    [self dismissViewControllerAnimated:YES completion:NULL];
}

And finally, once the modal view controller has appeared, we need to update the property on the interactor
to make sure that the next time an interactive transition is started (i.e. the user begins a vertical pan) it calls
the method on the modal VC, not the main one:
- (void)viewDidAppear:(BOOL)animated
{
    // Reset which view controller should be the recipient of the
    // interactor's transition
    self.interactor.presentingVC = self;
}

And that's it! If you run the app up now and drag vertically, you'll see that the transition to show the modal
view controller follows your finger. If you drag further than half way and let go, the transition will
complete; otherwise it will return to its original state.


Conclusion
Interactive view controller transitions can appear to be quite a complicated topic - primarily due to the
vast array of different protocols that you need to implement, and also because it's not immediately obvious
which pieces of the puzzle should be responsible for what (e.g. who should own the gesture recogniser?).
However, in reality, we've got some really quite powerful functionality for a small amount of code. I encourage
you to give these custom view controller transitions a try, but be aware: with great power comes great
responsibility. Just because we can now do lots of whacky transitions between view controllers, we should
ensure that we don't overcomplicate the UX for our app users.

Day 15: CoreImage Filters


Introduction
CoreImage is a framework for image processing which was introduced in iOS5. It abstracts all the low-level
guff associated with dealing with images away from the user, and has an easy-to-use filter-chain architecture.
iOS7 introduces new filters, some of which we're going to take a look at in today's DbD. We'll start by looking
at some more traditional photo-effect filters, before taking a look at a new creative filter which generates
QR codes.

Photo Effect Filters
The ability to apply cool effects to your photos is now ever-present in the mobile app world. Made popular
by Instagram, it seems that it's no longer possible to take a photo without being encouraged to make it appear
that you took it on a 40-year-old camera which has a light-leak. Well, CoreImage has added some really
easy-to-use filters to help you add this functionality to your apps.
In order to use these filters we'll need to do a bit of CoreImage. CoreImage specifies its own image type,
CIImage, which can be created from lots of different sources, including the CoreGraphics equivalent, CGImage:
UIImage *_inputUIImage = [UIImage imageNamed:@"shinobi-badge-head.jpg"];
CIImage *_inputImage = [CIImage imageWithCGImage:[_inputUIImage CGImage]];

Using filters is really simple - they can even be chained together, but for our purposes we just want to apply
a single filter:
CIFilter *filter = [CIFilter filterWithName:@"CIPhotoEffectChrome"];
[filter setValue:_inputImage forKey:kCIInputImageKey];

A CoreImage filter is represented by the CIFilter class, which has a factory method to create a specific
filter object. These filter objects then use KVC to specify the relevant filter arguments. All of the new
photo-effect filters take just a single argument - the input image - which is specified using the string constant
kCIInputImageKey.
We can then turn this back into a UIImage for display in a UIImageView:
UIImage *outputImage = [UIImage imageWithCIImage:filter.outputImage];
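As mentioned above, filters can also be chained, by feeding one filter's outputImage in as the next filter's
input. A minimal sketch using two stock CoreImage filters - this particular combination is just for
illustration, and isn't part of the sample app:

// Sepia-tone the input, then invert the colors of the result
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:_inputImage forKey:kCIInputImageKey];
[sepia setValue:@0.8 forKey:@"inputIntensity"];

CIFilter *invert = [CIFilter filterWithName:@"CIColorInvert"];
[invert setValue:sepia.outputImage forKey:kCIInputImageKey];

UIImage *chainedOutput = [UIImage imageWithCIImage:invert.outputImage];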

The new photo-effect filters are referenced with the following strings:

@"CIPhotoEffectChrome"
@"CIPhotoEffectFade"
@"CIPhotoEffectInstant"
@"CIPhotoEffectMono"
@"CIPhotoEffectNoir"
@"CIPhotoEffectProcess"
@"CIPhotoEffectTonal"
@"CIPhotoEffectTransfer"

In the app which accompanies today's post we have a collection view which demonstrates the output of each
of the new filters on a single input image. Since we don't have loads of images, we process them all up-front,
to preserve the scrolling performance we expect from iOS.
This also requires that we construct CGImage versions of each of the CIImage filter outputs, because the
outputImage property is generated lazily. To do this, we use a CIContext to draw the CIImage into a
CoreGraphics context:
// Create a CG-backed UIImage
CGImageRef cgImage = [[CIContext contextWithOptions:nil]
                                   createCGImage:filter.outputImage
                                        fromRect:filter.outputImage.extent];
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);

[images addObject:image];

The rest of the code in SCPhotoFiltersViewController is the boilerplate required to run a collection
view with custom cells. If you run up the app you can see the different filtered results:


QR Code Generation
In addition to the photo-effect filters, iOS7 also introduces a filter which is capable of generating QR
codes to represent a specific data object. In the sample app the second tab (SCQRGeneratorViewController)
demonstrates this functionality - when the Generate button is pressed, the content of the text field is
encoded in a QR code, displayed above it.
The method which creates the QR code is really rather simple:
- (CIImage *)createQRForString:(NSString *)qrString
{
    // Need to convert the string to a UTF-8 encoded NSData object
    NSData *stringData = [qrString dataUsingEncoding:NSUTF8StringEncoding];

    // Create the filter
    CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    // Set the message content and error-correction level
    [qrFilter setValue:stringData forKey:@"inputMessage"];
    [qrFilter setValue:@"H" forKey:@"inputCorrectionLevel"];

    // Send the image back
    return qrFilter.outputImage;
}

The QR filter requires an NSData object which it will encode, and hence we first take the NSString and encode
it into an NSData object using UTF-8 encoding.
Then, same as we did before, we create a CIFilter using the filterWithName: factory method, specifying
the name to be CIQRCodeGenerator. The two keys we need to set in this case are called inputMessage, which
is the NSData object we just created, and inputCorrectionLevel, which specifies how resilient to error the
code will be. There are 4 levels (a usage sketch follows the list):

- L: 7% error resilience
- M: 15% error resilience
- Q: 25% error resilience
- H: 30% error resilience
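Swapping to a different level is simply a matter of passing the corresponding single-character string; for example, a sketch using the medium level:

// Tolerate up to 15% damage to the printed code
[qrFilter setValue:@"M" forKey:@"inputCorrectionLevel"];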

Once we've done this we can return the outputImage of the filter, which will be a CIImage with 1pt resolution
for the smallest squares.
We want to be able to resize this image, but we don't want to allow any interpolation since what we have is
pixel-perfect. In order to do this we create a new method which enables rescaling an image with interpolation
disabled:
- (UIImage *)createNonInterpolatedUIImageFromCIImage:(CIImage *)image
                                            withScale:(CGFloat)scale
{
    // Render the CIImage into a CGImage
    CGImageRef cgImage = [[CIContext contextWithOptions:nil] createCGImage:image
                                                                  fromRect:image.extent];

    // Now we'll rescale using CoreGraphics
    UIGraphicsBeginImageContext(CGSizeMake(image.extent.size.width * scale,
                                           image.extent.size.width * scale));
    CGContextRef context = UIGraphicsGetCurrentContext();
    // We don't want to interpolate (since we've got a pixel-correct image)
    CGContextSetInterpolationQuality(context, kCGInterpolationNone);
    CGContextDrawImage(context, CGContextGetClipBoundingBox(context), cgImage);
    // Get the image out
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Tidy up
    UIGraphicsEndImageContext();
    CGImageRelease(cgImage);
    return scaledImage;
}

Like we did in the previous example, we first create a CGImage representation of the CIImage. Then we
create a CoreGraphics context of the correctly rescaled resolution. The important line here is that we set
the interpolation quality to none. If we were rescaling a photo, this would look pretty terrible and pixelated,
but pixelated is exactly what we want for a QR code:
CGContextSetInterpolationQuality(context, kCGInterpolationNone);

Once we've drawn the image into the context then we can grab it out as a UIImage and return it. Thus, our
completed generation handler looks like this:
- (IBAction)handleGenerateButtonPressed:(id)sender {
    // Disable the UI
    [self setUIElementsAsEnabled:NO];
    [self.stringTextField resignFirstResponder];

    // Get the string
    NSString *stringToEncode = self.stringTextField.text;

    // Generate the image
    CIImage *qrCode = [self createQRForString:stringToEncode];

    // Convert to a UIImage
    UIImage *qrCodeImg = [self createNonInterpolatedUIImageFromCIImage:qrCode
                              withScale:2 * [[UIScreen mainScreen] scale]];

    // And push the image on to the screen
    self.qrImageView.image = qrCodeImg;

    // Re-enable the UI
    [self setUIElementsAsEnabled:YES];
}

There's a call to a utility method to disable the UI whilst we're generating:


- (void)setUIElementsAsEnabled:(BOOL)enabled
{
    self.generateButton.enabled = enabled;
    self.stringTextField.enabled = enabled;
}

If you run the app up now you'll be able to generate QR codes all day and night. No idea what you're going
to do with them - maybe soon we'll work out a way to read them.


QR Generator

Conclusion
CoreImage is a handy framework for doing some fairly advanced image processing without having to get too
involved with the low-level image manipulation. It has its quirks, but it can be really useful. With the new
photo-effect filters and QR code generator it might just have saved you finding an external dependency or
writing your own versions.

Day 16: Decoding QR Codes with AVFoundation

Introduction
Yesterday we looked at some of the new filters available in CoreImage, and discovered that in iOS7 we now
have the ability to generate QR codes. Well, given that we can create them you might imagine that it would
be helpful to be able to decode them as well, and you aren't about to be disappointed. In the 17th installment
of DbD we're going to take a look at how to use some new features in the AVFoundation framework to decode
(amongst other things) QR codes.

AVFoundation pipeline
AVFoundation is a large framework which facilitates creating, editing, display and capture of multimedia. This
post isn't meant to be an introduction to AVFoundation, but we'll cover the basics of getting a live feed from
the camera to appear on the screen, since it's this we'll use to extract QR codes. In order to use AVFoundation
we need to import the framework:
@import AVFoundation;

When capturing media, we use the AVCaptureSession class as the core of our pipeline. We then need to add
inputs and outputs to complete the session. We'll set this up in the viewDidLoad method of our view controller.
Firstly, create a session:
AVCaptureSession *session = [[AVCaptureSession alloc] init];

We need to add the main camera as an input to this session. An input is an AVCaptureDeviceInput object,
which is created from an AVCaptureDevice object:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                    error:&error];
if(input) {
    // Add the input to the session
    [session addInput:input];
} else {
    NSLog(@"error: %@", error);
    return;
}

Here we get a reference to the default video input device, which will be the rear camera on devices with
multiple cameras. Then we create an AVCaptureDeviceInput object using the device, and add it to the
session.
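If you wanted the front camera instead, you could search the device list rather than taking the default - a minimal sketch:

// Fall back to the default, but prefer the front-facing camera if present
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if (d.position == AVCaptureDevicePositionFront) {
        device = d;
    }
}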
In order to get the video to appear on the screen we need to create an AVCaptureVideoPreviewLayer. This is a
CALayer subclass which, when added to a session, will display the current video output of the session. Given
that we have an ivar called _previewLayer of type AVCaptureVideoPreviewLayer:
_previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
_previewLayer.bounds = self.view.bounds;
_previewLayer.position = CGPointMake(CGRectGetMidX(self.view.bounds),
                                     CGRectGetMidY(self.view.bounds));
[self.view.layer addSublayer:_previewLayer];

The videoGravity property is used to specify how the video should appear within the bounds of the layer.
Since the aspect ratio of the video is not equal to that of the screen, we want to chop off the edges of the video
so that it appears to fill the entire screen, hence the use of AVLayerVideoGravityResizeAspectFill. We add
this layer as a sublayer of the view's layer.
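If you would rather letterbox the video than crop it, the resize-aspect gravity is the alternative:

// Preserve the whole frame, leaving bars at the edges instead of cropping
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;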
Now this is set up, all that remains is to start the session:
// Start the AVSession running
[session startRunning];

If you run the app up now (on a device) then you'll be able to see the camera's output on the screen - magic.


Preview Layer

Capturing metadata
You've been able to do what we've achieved so far since iOS5, but in this section we're going to do some stuff
which has only been possible since iOS7.
An AVCaptureSession can have AVCaptureOutput objects attached to it, forming the end points of the AV
pipeline. The AVCaptureOutput subclass we're interested in here is AVCaptureMetadataOutput, which detects
any metadata from the video content and outputs it. The output of this class isn't in the form of images or
video, but instead metadata objects which have been extracted from the video feed itself. Setting this up is as
follows:
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];

// Have to add the output before setting metadata types
[session addOutput:output];
// What different things can we register to recognise?
NSLog(@"%@", [output availableMetadataObjectTypes]);

Here, we've created a metadata output object, and added it as an output to the session. Then we've used a
provided method to log out a list of the different metadata types we can register to be informed about:
2013-10-09 11:10:26.085 CodeScanner[6277:60b] (
    "org.gs1.UPC-E",
    "org.iso.Code39",
    "org.iso.Code39Mod43",
    "org.gs1.EAN-13",
    "org.gs1.EAN-8",
    "com.intermec.Code93",
    "org.iso.Code128",
    "org.iso.PDF417",
    "org.iso.QRCode",
    "org.iso.Aztec"
)

It's important to note that we have to add our metadata output object to the session before attempting this,
since the available types depend on the input device. We can see above that we can register to detect QR
codes, so let's do that:
// We're only interested in QR Codes
[output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];

This is an array, so you can specify as many of the different metadata types as you wish.
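For instance, a sketch which scans for both QR codes and EAN-13 barcodes:

// Any combination of the types logged above can be requested
[output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode,
                                 AVMetadataObjectTypeEAN13Code]];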
When the metadata output object finds something within the video stream for which it can generate metadata
it tells its delegate, so we need to set the delegate:
// This VC is the delegate. Please call us on the main queue
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

Since AVFoundation is designed to allow threaded operation, we specify which queue we want the delegate
to be called on as well.
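There's nothing special about the main queue here - if the handling work were heavier, a private serial queue would work just as well (a sketch; the queue label is made up):

// Deliver metadata callbacks off the main thread
dispatch_queue_t metadataQueue =
    dispatch_queue_create("com.example.metadata", DISPATCH_QUEUE_SERIAL);
[output setMetadataObjectsDelegate:self queue:metadataQueue];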
The delegate protocol we need to adopt is AVCaptureMetadataOutputObjectsDelegate:
@interface SCViewController () <AVCaptureMetadataOutputObjectsDelegate> {
    AVCaptureVideoPreviewLayer *_previewLayer;
    UILabel *_decodedMessage;
}
@end

And the method we need to implement is captureOutput:didOutputMetadataObjects:fromConnection::

#pragma mark - AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *metadata in metadataObjects) {
        if ([metadata.type isEqualToString:AVMetadataObjectTypeQRCode]) {
            AVMetadataMachineReadableCodeObject *transformed =
                (AVMetadataMachineReadableCodeObject *)metadata;
            // Update the view with the decoded text
            _decodedMessage.text = [transformed stringValue];
        }
    }
}

The metadataObjects array consists of AVMetadataObject objects, which we inspect to find their type. Since
we've only registered to be notified of QR codes we'll be sent objects of type AVMetadataObjectTypeQRCode.
The AVMetadataMachineReadableCodeObject type has a stringValue property which contains the decoded
value of whatever metadata object has been detected. Here we're pushing this string to be displayed in the
_decodedMessage label, which was created in viewDidLoad:
// Add a label to display the resultant message
_decodedMessage = [[UILabel alloc] initWithFrame:CGRectMake(0,
                       CGRectGetHeight(self.view.bounds) - 75,
                       CGRectGetWidth(self.view.bounds), 75)];
_decodedMessage.numberOfLines = 0;
_decodedMessage.backgroundColor = [UIColor colorWithWhite:0.8 alpha:0.9];
_decodedMessage.textColor = [UIColor darkGrayColor];
_decodedMessage.textAlignment = NSTextAlignmentCenter;
[self.view addSubview:_decodedMessage];

Running the app up now and pointing it at a QR code will cause the decoded string to appear at the bottom
of the screen:


Decoding

Drawing the code outline


In addition to providing the decoded text, the metadata objects also contain a bounding box and the locations
of the corners of the detected QR code. Our scanner app would be a lot more intuitive if we displayed the
location of the detected code.
In order to do this we create a UIView subclass which, when provided with a sequence of points, will connect
the dots. This will become clear as we build it:
@interface SCShapeView : UIView

@property (nonatomic, strong) NSArray *corners;

@end

The corners array contains (boxed) CGPoint objects, each of which represents a corner of the shape we wish
to draw.
We're going to use a CAShapeLayer to draw the points, as this is an extremely efficient way of drawing shapes:

@interface SCShapeView () {
    CAShapeLayer *_outline;
}
@end

@implementation SCShapeView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        _outline = [CAShapeLayer new];
        _outline.strokeColor =
            [[[UIColor blueColor] colorWithAlphaComponent:0.8] CGColor];
        _outline.lineWidth = 2.0;
        _outline.fillColor = [[UIColor clearColor] CGColor];
        [self.layer addSublayer:_outline];
    }
    return self;
}
@end

Here we create a shape layer, set some appearance properties on it, and add it to the layer hierarchy. We are
yet to set the path of the shape - we'll do that in the setter for the corners property:
- (void)setCorners:(NSArray *)corners
{
    if(corners != _corners) {
        _corners = corners;
        _outline.path = [[self createPathFromPoints:corners] CGPath];
    }
}

This means that as the corners property is updated, the shape will be redrawn in its new position. We've
used a utility method to create a UIBezierPath from an NSArray of boxed CGPoint objects:

- (UIBezierPath *)createPathFromPoints:(NSArray *)points
{
    UIBezierPath *path = [UIBezierPath new];
    // Start at the first corner
    [path moveToPoint:[[points firstObject] CGPointValue]];

    // Now draw lines around the corners
    for (NSUInteger i = 1; i < [points count]; i++) {
        [path addLineToPoint:[points[i] CGPointValue]];
    }

    // And join it back to the first corner
    [path addLineToPoint:[[points firstObject] CGPointValue]];

    return path;
}

This is fairly self-explanatory - just using the API of UIBezierPath to create a completed shape.
Now we've created this shape view, we need to use it in our view controller to show the detected QR code.
Let's create an ivar, and create the object in viewDidLoad:
_boundingBox = [[SCShapeView alloc] initWithFrame:self.view.bounds];
_boundingBox.backgroundColor = [UIColor clearColor];
_boundingBox.hidden = YES;
[self.view addSubview:_boundingBox];

Now we need to update this view in the metadata output delegate method:
// Transform the meta-data coordinates to screen coords
AVMetadataMachineReadableCodeObject *transformed =
    (AVMetadataMachineReadableCodeObject *)[_previewLayer
        transformedMetadataObjectForMetadataObject:metadata];
// Update the frame on the _boundingBox view, and show it
_boundingBox.frame = transformed.bounds;
_boundingBox.hidden = NO;
// Now convert the corners array into CGPoints in the coordinate system
// of the bounding box itself
NSArray *translatedCorners = [self translatePoints:transformed.corners
                                          fromView:self.view
                                            toView:_boundingBox];

// Set the corners array
_boundingBox.corners = translatedCorners;

AVFoundation uses a different coordinate system to that used by UIKit when rendering on the screen, so
the first part of this code snippet uses the transformedMetadataObjectForMetadataObject: method on
AVCaptureVideoPreviewLayer to translate the coordinate system from AVFoundation, to be in the coordinate
system of our preview layer.
Next we set the frame of our shape overlay to be the same as the bounding box of the detected code, and
update its visibility.
We now need to set the corners property on the shape view so that the overlay is positioned correctly, but
before we do that we need to change coordinate systems again.
The corners property on AVMetadataMachineReadableCodeObject is an NSArray of dictionary objects, each
of which has X and Y keys. Since we translated the coordinate systems, the values associated with the corners
refer to the video preview layer - but we want them to be in terms of our shape overlay. Therefore we use the
following utility method:
- (NSArray *)translatePoints:(NSArray *)points
                    fromView:(UIView *)fromView
                      toView:(UIView *)toView
{
    NSMutableArray *translatedPoints = [NSMutableArray new];

    // The points are provided in a dictionary with keys X and Y
    for (NSDictionary *point in points) {
        // Let's turn them into CGPoints
        CGPoint pointValue = CGPointMake([point[@"X"] floatValue],
                                         [point[@"Y"] floatValue]);
        // Now translate from one view to the other
        CGPoint translatedPoint = [fromView convertPoint:pointValue toView:toView];
        // Box them up and add to the array
        [translatedPoints addObject:[NSValue valueWithCGPoint:translatedPoint]];
    }

    return [translatedPoints copy];
}

Here we use convertPoint:toView: from UIView to change coordinate systems, and return an NSArray
containing NSValue boxed CGPoint objects instead of NSDictionary objects. We can then pass this to the
corners property of our shape view.
If you run the app up now you'll see the bounding box of the code highlighted as well as the decoded message:


The final bits of code in the example app cause the decoded message and bounding box to disappear after a
certain amount of time. This prevents the box from staying on the screen when there are no QR codes present.
- (void)startOverlayHideTimer
{
    // Cancel it if we're already running
    if(_boxHideTimer) {
        [_boxHideTimer invalidate];
    }

    // Restart it to hide the overlay when it fires
    _boxHideTimer = [NSTimer scheduledTimerWithTimeInterval:0.2
                                                     target:self
                                                   selector:@selector(removeBoundingBox:)
                                                   userInfo:nil
                                                    repeats:NO];
}

Each time this method gets called it resets the timer which, when it finally fires, will call the following
method:
- (void)removeBoundingBox:(id)sender
{
    // Hide the box and remove the decoded text
    _boundingBox.hidden = YES;
    _decodedMessage.text = @"";
}

We call the timer method at the end of the delegate method:


// Start the timer which will hide the overlay
[self startOverlayHideTimer];

Conclusion
AVFoundation is a very complex and powerful framework, and in iOS7 it just got better. Detecting different
barcodes live used to be quite a difficult task on mobile devices, but with the introduction of these new metadata
output types it is now really simple and efficient. Whether or not we should be using QR codes is a different
question, but at least it's easy if we want to =)

Day 17: iBeacons


Introduction
Although not really mentioned in any great detail during the iOS7 unveiling keynote, a major addition is the
concept of iBeacons. These are a new feature of Bluetooth LE which allows proximity-based notifications
and ranging. Sample uses include notification that you're approaching a shop, and then showing a list of
special offers. Or maybe bringing up the receipt for an order you have purchased as you approach the cash
register. There is a long list of different possible uses, and I'm sure we'll see some further creative uses over
the coming months.
Today we're going to take a look at how to get an iOS device to act as an iBeacon, and also how to use a
different device to estimate the distance to that iBeacon. We'll create an app based on a Hot/Cold hide and
seek game, where the iBeacon device can be hidden, and the seeker device displays updates of estimated range
to it.

Create a beacon
To make an app act like an iBeacon we use CoreLocation to create the beacon properties, and then ask
CoreBluetooth to broadcast them appropriately.
iBeacons have several properties used to identify them uniquely:

- proximityUUID: an NSUUID object which identifies your company's beacons. You can have many beacons with the same UUID, and set CoreLocation to notify you whenever one comes into range.
- major: an NSNumber representing the major ID of this particular beacon. This could identify a particular store, or floor within a store. The number is represented as a 16-bit unsigned integer.
- minor: another NSNumber which represents the individual beacon.

It's possible to set CoreLocation to notify at any of the 3 possible granularities of iBeacon ID - i.e. notify
whenever any iBeacon with the same UUID is in range, or with the same UUID and major ID, or require a
specific beacon - with UUID, major and minor IDs all matching.
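As a sketch (with made-up identifiers and IDs), the more specific granularities use the longer CLBeaconRegion initializers:

// All beacons in store number 1
CLBeaconRegion *storeRegion =
    [[CLBeaconRegion alloc] initWithProximityUUID:_beaconUUID
                                             major:1
                                        identifier:@"com.example.store1"];
// One specific beacon: UUID, major and minor must all match
CLBeaconRegion *tillRegion =
    [[CLBeaconRegion alloc] initWithProximityUUID:_beaconUUID
                                             major:1
                                             minor:42
                                        identifier:@"com.example.store1.till42"];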
We need to include both CoreLocation and CoreBluetooth for this project:
@import CoreBluetooth;
@import CoreLocation;

In order to make an app appear as a beacon, we create a CLBeaconRegion object, specifying the IDs we require.
In our case we will only set the UUID:

_rangedRegion = [[CLBeaconRegion alloc]
                     initWithProximityUUID:_beaconUUID
                                identifier:@"com.shinobicontrols.HotOrCold"];

The UUID was created as per:


_beaconUUID = [[NSUUID alloc] initWithUUIDString:@"3B2DCB64-A300-4F62-8A11-F6E7A06E4BC0"];

We can create a UUIDString using the OSX uuidgen tool:


17-ibeacons git:(days/17-ibeacons) uuidgen
874D949F-3325-4B3F-A6F4-AB5BBCE440F6

We'll also need to create a CoreBluetooth peripheral manager:

_cbPeripheralManager = [[CBPeripheralManager alloc]
                            initWithDelegate:self
                                       queue:dispatch_get_main_queue()];

A CBPeripheralManager has to have a delegate set (even though we won't be using it in this example), and it
has a required method:
#pragma mark - CBPeripheralManager delegate methods
- (void)peripheralManagerDidUpdateState:(CBPeripheralManager *)peripheral
{
    // We don't really care...
}

Now, when we want to start broadcasting as an iBeacon then we get hold of a dictionary of settings from the
CLBeaconRegion and pass it to the CBPeripheralManager to begin broadcast:
- (IBAction)handleHidingButtonPressed:(id)sender {
    if(_cbPeripheralManager.state < CBPeripheralManagerStatePoweredOn) {
        NSLog(@"Bluetooth must be enabled in order to act as an iBeacon");
        return;
    }

    // Now we construct a CLBeaconRegion to represent ourself
    NSDictionary *toBroadcast = [_rangedRegion peripheralDataWithMeasuredPower:@-60];
    [_cbPeripheralManager startAdvertising:toBroadcast];
}

Firstly we check that the peripheral manager is ready to go, before constructing the settings to broadcast, and
then beginning to advertise the details. The measuredPower argument specifies the power in dBs observed at
a distance of 1m from the transmitter.

Hiding

We can stop the iBeacon by calling the stopAdvertising method on the CBPeripheralManager object.
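That is a one-line call:

// Stop broadcasting the iBeacon advertisement
[_cbPeripheralManager stopAdvertising];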

Beacon Ranging
Using CoreLocation, we can request alerts when an iBeacon with a particular ID comes into range, or get
regular updates as to the approximate range of all local beacons. In our HotOrCold game we are going to
request range updates for the beacon we created above.
We need to create a CoreLocation CLLocationManager:

_clLocationManager = [CLLocationManager new];
_clLocationManager.delegate = self;

Notice that we're setting the delegate as well, and we'll implement the following delegate method:
- (void)locationManager:(CLLocationManager *)manager
        didRangeBeacons:(NSArray *)beacons
               inRegion:(CLBeaconRegion *)region
{
    if([region isEqual:_rangedRegion]) {
        // Let's just take the first beacon
        CLBeacon *beacon = [beacons firstObject];
        self.statusLabel.textColor = [UIColor whiteColor];
        self.signalStrengthLabel.textColor = [UIColor whiteColor];
        self.signalStrengthLabel.text = [NSString stringWithFormat:@"%ddB", beacon.rssi];
        switch (beacon.proximity) {
            case CLProximityUnknown:
                self.view.backgroundColor = [UIColor blueColor];
                [self setStatus:@"Freezing!"];
                break;

            case CLProximityFar:
                self.view.backgroundColor = [UIColor blueColor];
                [self setStatus:@"Cold!"];
                break;

            case CLProximityImmediate:
                self.view.backgroundColor = [UIColor purpleColor];
                [self setStatus:@"Warmer"];
                break;

            case CLProximityNear:
                self.view.backgroundColor = [UIColor redColor];
                [self setStatus:@"HOT!"];
                break;

            default:
                break;
        }
    }
}

This delegate method responds to ranging updates from beacons (we'll register to receive these in a moment).
The delegate method gets called at a frequency of 1Hz, and is provided with an array of beacons. A CLBeacon
has properties which determine its identity, and also the approximate range of the beacon. We're using this
to set the background color of the view and update the status label using the following utility method:

- (void)setStatus:(NSString *)status
{
    self.statusLabel.hidden = NO;
    self.statusLabel.text = status;
}
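Incidentally, besides the coarse proximity buckets used in the switch statement, CLBeacon also exposes an accuracy estimate in metres; a sketch of logging everything we get per update:

// accuracy is negative when the range cannot be determined
NSLog(@"Beacon %@/%@: rssi=%ddB, ~%.2fm away",
      beacon.major, beacon.minor, beacon.rssi, beacon.accuracy);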

In order for this delegate method to be called, we need to ask the location manager to start ranging for a
particular beacon:
[_clLocationManager startRangingBeaconsInRegion:_rangedRegion];

This has a complementary method to stop the beacon ranging:


[_clLocationManager stopRangingBeaconsInRegion:_rangedRegion];

If you run up this app on 2 devices (both of which have Bluetooth LE), and set one to hide and one to seek,
you can play HotOrCold yourself:

Conclusion
iBeacons offer fantastic potential - they could even be one of the most disruptive new features of iOS7. I think
they are both Apple's answer to, and the final nail in the coffin of, NFC on mobile devices. Hopefully not
only will our phones soon have the correct information available to us as we arrive at a service desk, but we
might also start to see indoor navigation. I encourage you to take a look at the iBeacon API - it's not very
complicated, and I look forward to seeing your innovative uses!

Day 18: Detecting Face Features with CoreImage

Introduction
Face detection has been present in iOS since iOS 5, in both AVFoundation and CoreImage. In iOS7, the face
detection in CoreImage has been enhanced to include feature detection - including looking for smiles and
blinking eyes. The API is nice and easy to use, so we're going to create an app which uses the face detection
in AVFoundation to determine when to take a photo, and then will let the user know whether or not it is a
good photo by using CoreImage to search for smiles and closed eyes.

Face detection with AVFoundation

Day 16's post was about using AVFoundation to detect and decode QR codes, via the AVCaptureMetadataOutput
class. The face detector is used in the same way - faces are just metadata objects in the same way that a QR
code was. We'll create an AVCaptureMetadataOutput object in the same manner, but with a different metadata
type:
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];

// Have to add the output before setting metadata types
[_session addOutput:output];
// We're only interested in faces
[output setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];
// This VC is the delegate. Please call us on the main queue
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

We implement the delegate method as before:


- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for(AVMetadataObject *metadataObject in metadataObjects) {
        if([metadataObject.type isEqualToString:AVMetadataObjectTypeFace]) {
            // Take an image of the face and pass to CoreImage for detection
            AVCaptureConnection *stillConnection =
                [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
            [_stillImageOutput
             captureStillImageAsynchronouslyFromConnection:stillConnection
             completionHandler:
                 ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                     if(error) {
                         NSLog(@"There was a problem");
                         return;
                     }

                     NSData *jpegData = [AVCaptureStillImageOutput
                         jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

                     UIImage *smileyImage = [UIImage imageWithData:jpegData];
                     _previewLayer.hidden = YES;
                     [_session stopRunning];
                     self.imageView.hidden = NO;
                     self.imageView.image = smileyImage;
                     self.activityView.hidden = NO;
                     self.statusLabel.text = @"Processing";
                     self.statusLabel.hidden = NO;

                     CIImage *image = [CIImage imageWithData:jpegData];
                     [self imageContainsSmiles:image callback:^(BOOL happyFace) {
                         if(happyFace) {
                             self.statusLabel.text = @"Happy Face Found!";
                         } else {
                             self.statusLabel.text = @"Not a good photo...";
                         }
                         self.activityView.hidden = YES;
                         self.retakeButton.hidden = NO;
                     }];
                 }];
        }
    }
}

This is fairly similar to what we did with QR codes, only now we have added a new output type to the session
- AVCaptureStillImageOutput. This allows us to take a photo of the input at a given moment - which is
exactly what captureStillImageAsynchronouslyFromConnection:completionHandler: does. So, when we
are notified that AVFoundation has detected a face, we take a still image of the current input, and stop the
session.
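The _stillImageOutput ivar referenced above would have been wired up alongside the metadata output; a sketch of the sort of configuration involved (the exact settings in the accompanying app may differ):

// Configure a still image output producing JPEG, and attach it to the session
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
_stillImageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG};
[_session addOutput:_stillImageOutput];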
We create a JPEG representation of the captured image with the following:
NSData *jpegData = [AVCaptureStillImageOutput
    jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

Now we pop this into a UIImageView, and create a CIImage version as well, in preparation for the CoreImage
facial feature detection. We'll take a look at this imageContainsSmiles:callback: method next.

Feature finding with CoreImage


CoreImage requires a CIContext and a CIDetector:
if(!_ciContext) {
    _ciContext = [CIContext contextWithOptions:nil];
}

if(!_faceDetector) {
    _faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                       context:_ciContext
                                       options:nil];
}

To get the detector to perform its search, we invoke the featuresInImage:options: method:
NSArray *features = [_faceDetector featuresInImage:image
                                           options:@{CIDetectorEyeBlink: @YES,
                                                     CIDetectorSmile: @YES,
                                                     CIDetectorImageOrientation: @5}];

In order to get the detector to perform smile and blink detection we have to specify as much in the detector
options (CIDetectorEyeBlink and CIDetectorSmile). The CoreImage face detector is orientation specific, and
therefore we're also setting the detector orientation here to match the orientation in which the app has been
designed.
Now we can loop through the features array (which contains CIFaceFeature objects) and interrogate each
one to find out whether it contains a smile or blinking eyes:
BOOL happyPicture = NO;
if([features count] > 0) {
    happyPicture = YES;
}
for(CIFeature *feature in features) {
    if ([feature isKindOfClass:[CIFaceFeature class]]) {
        CIFaceFeature *faceFeature = (CIFaceFeature *)feature;
        if(!faceFeature.hasSmile) {
            happyPicture = NO;
        }
        if(faceFeature.leftEyeClosed || faceFeature.rightEyeClosed) {
            happyPicture = NO;
        }
    }
}

Finally we perform the callback on the main queue:


dispatch_async(dispatch_get_main_queue(), ^{
    callback(happyPicture);
});

Our callback method updates the label to describe whether or not a good photo was taken:
[self imageContainsSmiles:image callback:^(BOOL happyFace) {
    if(happyFace) {
        self.statusLabel.text = @"Happy Face Found!";
    } else {
        self.statusLabel.text = @"Not a good photo...";
    }
    self.activityView.hidden = YES;
    self.retakeButton.hidden = NO;
}];

If you run the app up you can see how good the CoreImage facial feature detector is:

In addition to these properties, it's also possible to find the positions of the different facial features, such as
the eyes and the mouth.
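For example, a sketch of reading those positions off a CIFaceFeature (the has* flags guard against missing data):

if (faceFeature.hasLeftEyePosition) {
    NSLog(@"Left eye at %@", NSStringFromCGPoint(faceFeature.leftEyePosition));
}
if (faceFeature.hasMouthPosition) {
    NSLog(@"Mouth at %@", NSStringFromCGPoint(faceFeature.mouthPosition));
}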


Conclusion
Although not a ground-breaking addition to the API, this advance in the CoreImage facial detector adds a
nice ability to interrogate your facial images. It could make a nice addition to a photography app - helping
users take all the selfies they need.

Day 19: UITableView Row Height Estimation


Introduction
Today we're going to take a look at a fairly small addition to the UIKit API, but one which could make quite
a difference to the user experience of apps with complex table views. Row height estimation takes the form of
an additional method on the table view delegate which, rather than having to return the exact height of every
row at initial load, allows an estimated size to be returned instead. We'll look at why this is an advantage in
today's post. In order to demonstrate its potential we'll construct a slightly contrived app which has a table
view which we can view both with and without row height estimation.

Without estimation
We create a simple UITableView with a UITableViewController, containing just 1 section with 200 rows. The
cells contain their index and their height, which varies on a row-by-row basis. This is important - if all the
rows are the same height then we don't need to implement the heightForRowAtIndexPath: method on the
delegate, and we won't get any improvement out of using the new row height estimation method.
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
    // Return the number of sections.
    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView
 numberOfRowsInSection:(NSInteger)section
{
    // Return the number of rows in the section.
    return 200;
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [tableView
        dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath];

    // Configure the cell...
    cell.textLabel.text = [NSString stringWithFormat:@"Cell %03d", indexPath.row];
    CGFloat height = [self heightForRowAtIndex:indexPath.row];
    cell.detailTextLabel.text = [NSString stringWithFormat:@"Height %0.2f", height];
    return cell;
}

The heightForRowAtIndex: method is a utility method which will return the height of a given row:
- (CGFloat)heightForRowAtIndex:(NSUInteger)index
{
    CGFloat result;
    for (NSInteger i = 0; i < 1e5; i++) {
        result = sqrt((double)i);
    }
    result = (index % 3 + 1) * 20.0;
    return result;
}

If we had a complex table with cells of differing heights, it is likely that we would have to construct the cell
to be able to determine its height, which takes a long time. To simulate this we've put a superfluous loop
calculation in the height calculation method - it isn't of any use, but takes some computational time.
We also need a delegate to return the row heights as we go, so we create SCNonEstimatingTableViewDelegate:
@interface SCNonEstimatingTableViewDelegate : NSObject <UITableViewDelegate>
- (instancetype)initWithHeightBlock:(CGFloat (^)(NSUInteger index))heightBlock;
@end

This has a constructor which takes a block which is used to calculate the row height of a given row:
@implementation SCNonEstimatingTableViewDelegate
{
    CGFloat (^_heightBlock)(NSUInteger index);
}

- (instancetype)initWithHeightBlock:(CGFloat (^)(NSUInteger))heightBlock
{
    self = [super init];
    if(self) {
        _heightBlock = [heightBlock copy];
    }
    return self;
}
@end

And we implement the relevant delegate method:

#pragma mark - UITableViewDelegate methods
- (CGFloat)tableView:(UITableView *)tableView
heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    NSLog(@"Height (row %d)", indexPath.row);
    return _heightBlock(indexPath.row);
}

This logs that it has been called and uses the block to calculate the row height for the specified index path.
With a bit of wiring up in the view controller we're done:
- (void)viewDidLoad
{
    [super viewDidLoad];

    _delegate = [[SCNonEstimatingTableViewDelegate alloc]
                     initWithHeightBlock:^CGFloat(NSUInteger index) {
                         return [self heightForRowAtIndex:index];
                     }];
    self.tableView.delegate = _delegate;
}

Running the app up now will demonstrate the variable row height table:


TableView

Looking at the log messages we can see that the row height method gets called for every single row in the
table before we first render the table. This is because the table view needs to know its total height (for drawing
the scroll bar etc). This can present a problem in complex table views, where calculating the height of a row
is a complex operation - it might involve fetching the content, or rendering the cell to discover how much
space is required. It's not always an easy operation. Our heightForRowAtIndex: utility method simulates this
complexity with a long loop of calculations. Adding a bit of timing logic we can see that in this contrived
example (and running on a simulator) we have a delay of nearly half a second from loading the tableview to
it appearing:

Without estimation
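The timing logic itself isn't shown here; a hypothetical sketch of the sort of measurement involved (names made up, and the exact approach in the sample app may differ):

// Measure the gap between loading the table view and its first layout
NSDate *loadStart = [NSDate date];
[self.tableView layoutIfNeeded];
NSLog(@"Initial table layout took %.3fs", -[loadStart timeIntervalSinceNow]);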


With estimation
The new height estimation delegate methods provide a way to improve this initial delay to rendering the
table. If we implement tableView:estimatedHeightForRowAtIndexPath: in addition to the aforementioned
tableView:heightForRowAtIndexPath: then, rather than calling the height method for every row before
rendering the tableview, the estimatedHeight method will be called for every row, and the height method
just for rows which are being rendered on the screen. Therefore, we have separated the height calculation into
a method which requires the exact height (since the cell is about to appear on screen), and a method which is
just used to calculate the height of the entire tableview (hence doesn't need to be perfectly accurate).
To demonstrate this in action we create a new delegate which will implement the height estimation method:
@interface SCEstimatingTableViewDelegate : SCNonEstimatingTableViewDelegate
- (instancetype)initWithHeightBlock:(CGFloat (^)(NSUInteger index))heightBlock
                    estimationBlock:(CGFloat (^)(NSUInteger index))estimationBlock;
@end

Here we've got a constructor with 2 blocks: one will be used for the exact height method, and one for the
estimation:
@implementation SCEstimatingTableViewDelegate {
    CGFloat (^_estimationBlock)(NSUInteger index);
}

- (instancetype)initWithHeightBlock:(CGFloat (^)(NSUInteger index))heightBlock
                    estimationBlock:(CGFloat (^)(NSUInteger index))estimationBlock
{
    self = [super initWithHeightBlock:heightBlock];
    if(self) {
        _estimationBlock = [estimationBlock copy];
    }
    return self;
}
@end

And then we implement the new estimation method:

#pragma mark - UITableViewDelegate methods
- (CGFloat)tableView:(UITableView *)tableView
estimatedHeightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    NSLog(@"Estimating height (row %d)", indexPath.row);
    return _estimationBlock(indexPath.row);
}

We update the view controller with a much cheaper height estimation method - one which just returns the
average height of our cells (40.0):
- (void)viewDidLoad
{
    [super viewDidLoad];

    if(self.enableEstimation) {
        _delegate = [[SCEstimatingTableViewDelegate alloc] initWithHeightBlock:
            ^CGFloat(NSUInteger index) {
                return [self heightForRowAtIndex:index];
            } estimationBlock:^CGFloat(NSUInteger index) {
                return 40.0;
            }];
    } else {
        _delegate = [[SCNonEstimatingTableViewDelegate alloc] initWithHeightBlock:
            ^CGFloat(NSUInteger index) {
                return [self heightForRowAtIndex:index];
            }];
    }
    self.tableView.delegate = _delegate;
}

Running the app up now and observing the log, we'll see that the height method no longer gets called for
every cell before the initial render; the estimated height method is called instead. The height method is called
just for the cells which are being rendered on the screen. Consequently we see that the load time has dropped
to a fifth of a second:

With Estimation

Conclusion
As was mentioned before, this example is a little contrived, but it does demonstrate rather well that if
calculating the actual height is hard work then implementing the new estimation height method can really
improve the responsiveness of your app, particularly if you have a large tableview. There are additional height
estimation methods for section headers and footers which work in precisely the same manner (see the sketch
below). It might not be a groundbreaking API change, but in some cases it can really improve the user
experience, so it's definitely worth doing.
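A sketch of the header variant (the footer one is identical in shape):

- (CGFloat)tableView:(UITableView *)tableView
    estimatedHeightForHeaderInSection:(NSInteger)section
{
    // A cheap guess; the exact tableView:heightForHeaderInSection: is
    // only consulted for sections about to appear on screen
    return 30.0;
}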

Day 20: View controller content and navigation bars

Introduction
Today's post is a little different from the previous posts in this series. Since adopting iOS7, many developers
have been struggling with the appearance of their view controllers behind the navigation bar of
UINavigationControllers. We're going to take a look at why this is and attempt to explain how to get the
desired behavior.

iOS7 View Controller Changes: The theory


In iOS7 all view controllers use full screen layout, which means that the wantsFullScreenLayout property is
deprecated. However, we now have additional control over the way in which view controllers are displayed.
The following properties are configurable both in code and in interface builder:

- edgesForExtendedLayout: This defines which of the view's edges should be extended to the edge of the screen - i.e. underneath whatever bars (such as the navigation bar) might be in the way. By default this is set to UIRectEdgeAll, so all edges are extended.
- extendedLayoutIncludesOpaqueBars: By default the edges will only be extended underneath bars if they are translucent; setting this property to YES will cause the edges to be extended under opaque bars as well.
- automaticallyAdjustsScrollViewInsets: This is probably the most powerful property - if your view contains a scroll view then it will have its content insets set so that the content will scroll underneath the bars, but it'll be possible to scroll to see all the content. This is set to YES by default, and this is the iOS7 recommended behavior.
- topLayoutGuide, bottomLayoutGuide: These are properties which are generated to match the extent of the visible area of the view - i.e. if there is a bar at the top of the screen then the topLayoutGuide will be positioned at the bottom of the bar (see the sketch after this list).
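The layout guides are mainly useful as Auto Layout anchors. A minimal sketch, assuming a subview named label with translatesAutoresizingMaskIntoConstraints set to NO:

// Pin the label just below whatever bar is at the top of the screen
id topGuide = self.topLayoutGuide;
NSDictionary *views = NSDictionaryOfVariableBindings(label, topGuide);
[self.view addConstraints:
    [NSLayoutConstraint constraintsWithVisualFormat:@"V:[topGuide]-[label]"
                                            options:0
                                            metrics:nil
                                              views:views]];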

In Practice
Reading through the property descriptions above might make you think that it's all very easy, and in my
experience it is. In some cases. Otherwise it's just confusing.

View controller inside a navigation controller

Let's address the simplest case first: a view controller inside a navigation controller.


Here we need to set the edgesForExtendedLayout correctly, otherwise your view will appear underneath the
bar. This can be set in interface builder as follows:

Interface Builder

Or in code with:
self.edgesForExtendedLayout = UIRectEdgeNone;

We can see before and after below:

Scroll view inside a navigation controller

The effect we want for a scroll view inside a nav controller is that it is possible to scroll to see all the content,
but as you scroll the content disappears underneath the bars. Enter automaticallyAdjustsScrollViewInsets:
this is precisely what it does. With it set to NO we see the following behavior:


And changing it to YES the following:

Table view inside a navigation controller

UITableView is a subclass of UIScrollView so we'd expect the same behavior we saw in the previous section,
and indeed we do. automaticallyAdjustsScrollViewInsets is again the property we need to play with to
get the desired behavior:


Other cases
If you run up the accompanying sample app for today's post then you'll notice that there are some other
examples provided - namely a scrollview inside a tab controller, and a tableview inside a tab controller. For
some reason (I think it is a bug, but would love to be corrected), the scroll view insets are no longer adjusted
as they were inside the navigation controller:


Conclusion
The fact that all view controllers are now full screen has foxed a lot of developers, and with good reason. The
documentation around them isn't great, and I think there might be a bug in the scroll view inset adjustment
for tab bar controllers. However, it is worth playing around with - the concept of multiple layers is integral
to the new iOS7 look and feel, and when it works it does look rather good.

Day 21: Multi-column TextKit text rendering


Introduction
In day 12 we took a look at some of the powerful functionality available for rendering text using TextKit, in
the form of dynamic type and font descriptors. Today we're going to look at another aspect of TextKit - with
a demo of creating multi-column text layouts.
In the past, creating a multi-column layout of text in iOS has been hard work: potentially you could create
multiple UITextViews and manually cut the text to fit into each view, which will break with dynamic content,
or you could drop to the underlying layout engine CoreText, which is far from simple to use.
The introduction of TextKit in iOS7 changes this, and it's now incredibly easy to create lots of different text
layouts, including multi-page, multi-column and exclusion zones. In today's DbD we'll take a look at how to
build a multi-column paging text display, which renders a simple text file.

TextKit
TextKit is a massive framework, and this post isn't going to attempt to explain it in great detail at all. In order
to understand the multi-column project there are 4 classes to be familiar with:

- NSTextStorage: A subclass of NSAttributedString which contains both the content and formatting markup for the text we wish to render. It enables editing and keeps references to relevant layout managers to inform them of changes in the underlying text store.
- NSLayoutManager: Responsible for managing the rendering of the text from a store in one or multiple text container objects. Converts the underlying unicode characters into glyphs which can be rendered on screen. Can have multiple text containers to allow flowing of the text between different regions.
- NSTextContainer: Defines the region in which the text will be rendered. This is provided with glyphs from the layout manager and fills the area it specifies. Can use UIBezierPath objects as exclusion zones (see the sketch after this list).
- UITextView: Actually renders the text on screen. It has been updated for iOS7 with the addition of a constructor which takes an NSTextContainer.
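As a quick illustration of exclusion zones - a sketch, assuming an NSTextContainer named textContainer - text laid out in the container will flow around the oval:

// Text will be laid out around this region rather than over it
UIBezierPath *oval = [UIBezierPath bezierPathWithOvalInRect:
                          CGRectMake(50, 50, 100, 100)];
textContainer.exclusionPaths = @[oval];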
We are going to use all of these classes to create a multi-column text view. For far more information about
the TextKit architecture and how to use it then take a look at the TextKit Tutorial
(http://www.raywenderlich.com/50151/text-kit-tutorial) from our very own Colin Eberhardt
(https://twitter.com/colineberhardt).

Multiple Columns
We're going to put all the code into a view controller, so we need some ivars to keep hold of the text store and
the layout manager:

@interface SCViewController () {
    NSLayoutManager *_layoutManager;
    NSTextStorage *_textStorage;
}
@end

We'll create these in viewDidLoad. Firstly let's look at the text storage. We've got a .txt file as part of the bundle, which contains some plain-text Lorem Ipsum. Since NSTextStorage is a subclass of NSAttributedString
we can use the initWithFileURL:options:documentAttributes:error: constructor:
// Import the content into a text storage object
NSURL *contentURL = [[NSBundle mainBundle] URLForResource:@"content"
                                             withExtension:@"txt"];
_textStorage = [[NSTextStorage alloc] initWithFileURL:contentURL
                                              options:nil
                                   documentAttributes:NULL
                                                error:NULL];

Creating a layout manager is simple too:


// Create a layout manager
_layoutManager = [[NSLayoutManager alloc] init];
[_textStorage addLayoutManager:_layoutManager];

// Layout the text containers
[self layoutTextContainers];

Once we've created the _layoutManager we add it to the _textStorage. This not only provides the text content
to the layout manager, but will also ensure that if the underlying content changes the layout manager will be
informed appropriately.
At the end of viewDidLoad we're calling layoutTextContainers, which is a utility method we'll take a look
at now.
We are going to loop through each of the columns, creating a new NSTextContainer to specify the dimensions
of the text, and a UITextView to render it on the screen. The loop looks like this:

NSUInteger lastRenderedGlyph = 0;
CGFloat currentXOffset = 0;
while (lastRenderedGlyph < _layoutManager.numberOfGlyphs) {
    ...
}

// Need to update the scrollView size
CGSize contentSize = CGSizeMake(currentXOffset, CGRectGetHeight(self.scrollView.bounds));
self.scrollView.contentSize = contentSize;

We set up a couple of variables - one which will allow the loop to end (lastRenderedGlyph), and one to store
the x-offset of the current column. NSLayoutManager has a property which contains the total number of glyphs
which it is responsible for, so we're going to loop through until we've drawn all the glyphs we have.
After the loop has completed we're going to work out the correct size of the content we've created, and set it
on the scrollview, so that we can move between the pages as expected.
Inside the loop, the first thing we need to do is work out the dimensions of the current column:
CGRect textViewFrame = CGRectMake(currentXOffset, 10,
                                  CGRectGetWidth(self.view.bounds) / 2,
                                  CGRectGetHeight(self.view.bounds) - 20);
CGSize columnSize = CGSizeMake(CGRectGetWidth(textViewFrame) - 20,
                               CGRectGetHeight(textViewFrame) - 10);

We're setting the column to be the full height of the view, and half the width.
Now we can create an NSTextContainer to lay out the glyphs within the column area we have specified:
NSTextContainer *textContainer = [[NSTextContainer alloc] initWithSize:columnSize];
[_layoutManager addTextContainer:textContainer];

We also add the text container to the layout manager. This ensures that the container is provided with a
sequence of glyphs to render.
In order to get the container to render on the screen, we have to create a UITextView:
// And a text view to render it
UITextView *textView = [[UITextView alloc] initWithFrame:textViewFrame
                                            textContainer:textContainer];
textView.scrollEnabled = NO;
[self.scrollView addSubview:textView];

Here we're specifying the textContainer the text view is going to represent - using the newly introduced
initWithFrame:textContainer: method.
Finally we need to update our local variables for tracking the last rendered glyph and current column position:

// Increase the current offset
currentXOffset += CGRectGetWidth(textViewFrame);

// And find the index of the glyph we've just rendered
lastRenderedGlyph = NSMaxRange([_layoutManager glyphRangeForTextContainer:textContainer]);

For those of you who have tried to create text columns in iOS before, you'll be amazed to hear that we're
done! If you run the app up now you'll see the Lorem Ipsum content nicely laid out in columns half the
screen width, and with swiping enabled to move between pages:

Conclusion
TextKit is a major addition to iOS and represents some extremely powerful functionality. We've taken a look
today at how easy it is to put text into columns, and this barely scratches the surface of what is available. I
encourage you to investigate TextKit further if you are displaying any more than small amounts of text - it's
actually one of the new areas of iOS7 with pretty good documentation.

Day 22: Downloadable Fonts


Introduction
iOS comes with a selection of pre-installed fonts, but it is by no means exhaustive. In order to save disk space
with the install image, iOS provides a mechanism for downloading and using fonts at run-time.
Apple provides a set of fonts which they host and license for use, including fonts for non-roman alphabets,
and a selection of fonts users are used to using on desktop applications. The font-downloading functionality
has been available since iOS6, but in iOS7 there's a much larger list of available fonts.
Downloaded fonts are stored somewhere on the system - as app developers we don't have access to where
the fonts are stored. The font we require might well have already been downloaded at the request of another
app; however, if this isn't the case we need to be ready for the situation where the user doesn't have network
connectivity and therefore our chosen font isn't available. Or when there is a delay downloading the requested
font - do we switch the fonts out when they're available?
Firstly we'll take a look at how to get a list of fonts, before then demonstrating how to download and use a
specific font.

Listing available fonts

The API for downloading fonts is not part of TextKit, but rather the underlying rendering engine CoreText.
This therefore means that rather than dealing with Cocoa objects, we're going to see a lot of CoreFoundation
objects, and we'll be leaning on toll-free bridging to make our lives easier.
The function in CoreText we need to use is CTFontDescriptorCreateMatchingFontDescriptors, and we use
it to match an attribute which labels the font as a downloadable one: kCTFontDownloadableAttribute.
NSDictionary *descriptorOptions = @{(id)kCTFontDownloadableAttribute : @YES};
CTFontDescriptorRef descriptor =
    CTFontDescriptorCreateWithAttributes((CFDictionaryRef)descriptorOptions);
CFArrayRef fontDescriptors =
    CTFontDescriptorCreateMatchingFontDescriptors(descriptor, NULL);

On the first line we create an NSDictionary of descriptor attributes - here just specifying that we're only
interested in fonts which are downloadable. Then we create a CTFontDescriptorRef using this dictionary
- note here that we cast the NSDictionary to a CFDictionaryRef, making use of toll-free bridging. Finally we
call the function which will provide us with a list of font descriptors which match the descriptor we provided
- i.e. a list of descriptors which represent downloadable fonts.
The call to this last function is blocking, and may require a network call, so we're going to wrap this
functionality up in a requestDownloadableFontList method:

- (void)requestDownloadableFontList
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        NSDictionary *descriptorOptions = @{(id)kCTFontDownloadableAttribute : @YES};
        CTFontDescriptorRef descriptor =
            CTFontDescriptorCreateWithAttributes((CFDictionaryRef)descriptorOptions);
        CFArrayRef fontDescriptors =
            CTFontDescriptorCreateMatchingFontDescriptors(descriptor, NULL);

        dispatch_async(dispatch_get_main_queue(), ^{
            [self fontListDownloadComplete:(NSArray *)CFBridgingRelease(fontDescriptors)];
        });

        // Need to release the font descriptor
        CFRelease(descriptor);
    });
}

Things to note about this completed method:

- We perform the request asynchronously on a background queue, so that we don't block the main thread. Therefore we marshal a call to the fontListDownloadComplete: method back on to the main queue.
- This completion method expects an NSArray but we have a CFArrayRef, so we cast it to an NSArray. Since the function which created the CFArrayRef has the word Create in its name, we need to transfer ownership of the object into ARC with a CFBridgingRelease call.
- Finally, we need to release the font descriptor with CFRelease, for the same reason.
In the sample app which accompanies today's post we present these results as a table view which at the first
level displays font family names. Tapping on one of the family names will then push a new tableview into the
navigation controller which displays all the fonts within that family. Therefore, at the top level, we implement
fontListDownloadComplete: as follows:
- (void)fontListDownloadComplete:(NSArray *)fontList
{
    // Need to reorganise array into dictionary
    NSMutableDictionary *fontFamilies = [NSMutableDictionary new];
    for(UIFontDescriptor *descriptor in fontList) {
        NSString *fontFamilyName = [descriptor
                                    objectForKey:UIFontDescriptorFamilyAttribute];
        NSMutableArray *fontDescriptors = [fontFamilies objectForKey:fontFamilyName];
        if(!fontDescriptors) {
            fontDescriptors = [NSMutableArray new];
            [fontFamilies setObject:fontDescriptors forKey:fontFamilyName];
        }
        [fontDescriptors addObject:descriptor];
    }

    _fontList = [fontFamilies copy];

    [self.tableView reloadData];
}

Here we are simply re-organising the array of font descriptors into a dictionary, keyed by font family.
We're making use of the fact that UIFontDescriptor is toll-free bridged with CTFontDescriptorRef.
Once we have arranged the data correctly, we can reload the table. With the table view datasource methods
set appropriately, and viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    self.title = @"Families";

    [self requestDownloadableFontList];
}
we can run the app up and see that the first page of the navigation controller displays the list of font families.
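The datasource methods themselves aren't reproduced in the chapter. A minimal sketch - assuming a single section, the _fontList dictionary built above, and a prototype cell with the hypothetical identifier @"Cell" - might look like this:

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    return [_fontList count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"
                                                            forIndexPath:indexPath];
    // Same allKeys ordering as used in prepareForSegue:sender: below
    cell.textLabel.text = [_fontList allKeys][indexPath.row];
    return cell;
}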


The next level of the navigation controller displays the fonts within a specific family, so the second view
controller has an NSArray property which contains a list of font descriptors. We set this in the
prepareForSegue:sender: method of the first view controller:
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if ([[segue identifier] isEqualToString:@"ShowFamily"]) {
        SCFontViewController *vc = [segue destinationViewController];
        NSIndexPath *indexPath = [self.tableView indexPathForSelectedRow];
        NSString *fontFamilyName = [_fontList allKeys][indexPath.row];
        NSArray *fontList = _fontList[fontFamilyName];
        vc.fontList = fontList;
        vc.title = fontFamilyName;
    }
}

With appropriate datasource methods in this second view controller (a sketch follows), the second level of the drill-down displays the individual fonts within the selected family.
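Again these datasource methods aren't listed in the chapter; a minimal sketch, assuming the fontList property set in the segue above and a hypothetical cell identifier @"FontCell", might be:

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    return [self.fontList count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"FontCell"
                                                            forIndexPath:indexPath];
    UIFontDescriptor *descriptor = self.fontList[indexPath.row];
    cell.textLabel.text = [descriptor objectForKey:UIFontDescriptorNameAttribute];
    return cell;
}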


Downloading a font
The final stage of the app will display what the font looks like with some sample glyphs, if the font is available.
Otherwise the user will have the opportunity to download the font.
The download process is handled entirely within the handleDownloadPressed: method, and the function we're
interested in is CTFontDescriptorMatchFontDescriptorsWithProgressHandler. This takes a CFArrayRef of
font descriptors and downloads the fonts if required. It also takes a block as a parameter, which provides
progress updates to the user. The function returns immediately, and the operation is performed on a
background queue.

- (IBAction)handleDownloadPressed:(id)sender {
    self.downloadProgressBar.hidden = NO;
    CTFontDescriptorMatchFontDescriptorsWithProgressHandler(
        (__bridge CFArrayRef)@[_fontDescriptor],
        NULL,
        ^bool(CTFontDescriptorMatchingState state, CFDictionaryRef progressParameter) {
            double progressValue = [[(__bridge NSDictionary *)progressParameter
                objectForKey:(__bridge id)kCTFontDescriptorMatchingPercentage] doubleValue];
            if (state == kCTFontDescriptorMatchingDidFinish) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    self.downloadProgressBar.hidden = YES;
                    [self updateView];
                });
            } else {
                dispatch_async(dispatch_get_main_queue(), ^{
                    self.downloadProgressBar.progress = progressValue;
                });
            }
            return (bool)YES;
        }
    );
}

In the progress block, we extract the current progress percentage from the provided dictionary, and update
the progress bar as appropriate. If the state parameter suggests that the download has been completed, we
call updateView, which is a method we have created to apply the font to the sample glyphs. Note that we have
to ensure that the UI updates are performed on the main thread, as we usually do:
- (void)updateView
{
    NSString *fontName = [self.fontDescriptor objectForKey:UIFontDescriptorNameAttribute];
    self.title = fontName;
    UIFont *font = [UIFont fontWithName:fontName size:26.f];
    if(font && [font.fontName isEqualToString:fontName]) {
        self.sampleTextLabel.font = font;
        self.downloadButton.enabled = NO;
        self.detailDescriptionLabel.text = @"Font available";
    } else {
        // font will be nil if it hasn't been downloaded, so use a fixed size
        self.sampleTextLabel.font = [UIFont systemFontOfSize:26.f];
        self.downloadButton.enabled = YES;
        self.detailDescriptionLabel.text = @"This font is not yet downloaded";
    }
}

Running the app up now will allow us to browse through the list of available fonts from Apple, and download
each of them to try them out.


Conclusion
Downloadable fonts are a handy feature which will allow you to customize the appearance of your app
without having to license a font and bundle it with your app. However, it's important to ensure that you
handle the case where the user doesn't have network connectivity - what should the fall-back font be, and
does the UI work with both options?
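For example, one simple way to cope with a font which may not have downloaded yet is to fall back explicitly. A minimal sketch - the font name and size here are arbitrary placeholders, not recommendations:

// Hypothetical font name - substitute the downloadable font you actually want
UIFont *font = [UIFont fontWithName:@"MyDownloadableFont" size:18.f];
if (!font) {
    // fontWithName:size: returns nil if the font isn't on the device yet
    font = [UIFont systemFontOfSize:18.f];
}
self.sampleTextLabel.font = font;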

Day 23: Multipeer Connectivity


Introduction
One of the entirely new frameworks introduced in iOS7 was MultipeerConnectivity. This
represents a very Apple approach to what is traditionally a difficult problem: given that mobile devices
all have multiple different radio technologies built in to them, surely they should be able to communicate
with each other without having to send data via the internet. In the past it would have been possible to create
an ad-hoc wifi network, or pair devices over bluetooth, but neither of these options presented a very
user-friendly approach. With the MultipeerConnectivity framework this changes - the mechanics of setting
up networks are abstracted away from both the user and the developer, and instead communication takes place
via a technology-agnostic API.
In reality the framework uses whatever technology it has available - whether it be bluetooth or wifi, either
using an infrastructure network, or ad-hoc networking if the devices don't share the same network. This
is truly brilliant - the user just gets to select which of the surrounding devices they wish to connect to and the
framework will handle all the rest. It is even capable of using a node as a router between 2 nodes which can't
see each other, in a mesh-network manner.
In today's post we'll run through the code that's needed to set up a multipeer network like this, and how to
send data between devices.

Browsing for devices


In order to send data, it's necessary to establish a connection between devices, which is done with one device
browsing for appropriate devices within range. A request can then be sent to one of these devices, which
will alert the user - allowing them to accept or reject the connection. If the connection is accepted then the
framework will establish the link and allow data to be transferred.
There are 2 ways to browse for local devices - a visual one, and a programmatic one. We're going to
concentrate on the visual approach, though a brief sketch of the programmatic route appears later.
All nodes in the multipeer network have to have an ID - which is represented by the MCPeerID class:
_peerID = [[MCPeerID alloc] initWithDisplayName:self.peerNameTextField.text];

Here we're allowing the user to enter a name which will be used to identify their device to the users they
attempt to connect to.
The MCSession object is used to coordinate sending data between peers within that session. We first create
one and then add peers to it:

_session = [[MCSession alloc] initWithPeer:_peerID];
_session.delegate = self;

MCSession has a delegate property which adopts the MCSessionDelegate protocol. This includes methods for
monitoring peers as they change state (e.g. disconnect), along with methods which are called when a peer in the
network initiates a data transfer.
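The state-change method isn't shown in the chapter; a minimal sketch which simply logs the transitions might look like this:

- (void)session:(MCSession *)session
           peer:(MCPeerID *)peerID
 didChangeState:(MCSessionState)state
{
    switch (state) {
        case MCSessionStateConnecting:
            NSLog(@"%@ is connecting...", peerID.displayName);
            break;
        case MCSessionStateConnected:
            NSLog(@"%@ connected", peerID.displayName);
            break;
        case MCSessionStateNotConnected:
            NSLog(@"%@ disconnected", peerID.displayName);
            break;
    }
}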
In order to add peers to the session there is a UIViewController subclass which presents a list of local devices
to the user and allows them to select which they would like to establish a connection with. We create one of
these and then present it as a modal view controller:
MCBrowserViewController *browserVC = [[MCBrowserViewController alloc]
        initWithServiceType:@"shinobi-stream" session:_session];
browserVC.delegate = self;
[self presentViewController:browserVC animated:YES completion:NULL];
The serviceType argument is a string which identifies the service we're trying to connect to. This string can
comprise lowercase letters, numbers and hyphens, and should resemble a Bonjour service name.
Again we assign self to the delegate property - this time adopting the MCBrowserViewControllerDelegate
protocol. There are two methods we need to implement - for completion and cancellation of the browser view
controller. Here we're going to dismiss the browser, and enable a button if we were successful:
#pragma mark - MCBrowserViewControllerDelegate methods

- (void)browserViewControllerWasCancelled:(MCBrowserViewController *)browserViewController
{
    [browserViewController dismissViewControllerAnimated:YES completion:NULL];
}

- (void)browserViewControllerDidFinish:(MCBrowserViewController *)browserViewController
{
    [browserViewController dismissViewControllerAnimated:YES completion:^{
        self.takePhotoButton.enabled = YES;
    }];
}
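As an aside, the programmatic alternative mentioned earlier uses MCNearbyServiceBrowser instead of the browser view controller. A minimal sketch, reusing the _peerID and _session from above (the _browser ivar is an assumption, and a real app would be more selective than inviting every discovered peer):

_browser = [[MCNearbyServiceBrowser alloc] initWithPeer:_peerID
                                            serviceType:@"shinobi-stream"];
_browser.delegate = self;
[_browser startBrowsingForPeers];

#pragma mark - MCNearbyServiceBrowserDelegate methods

- (void)browser:(MCNearbyServiceBrowser *)browser
      foundPeer:(MCPeerID *)peerID
withDiscoveryInfo:(NSDictionary *)info
{
    // Invite the peer into our existing session, with a 30 second timeout
    [browser invitePeer:peerID toSession:_session withContext:nil timeout:30];
}

- (void)browser:(MCNearbyServiceBrowser *)browser lostPeer:(MCPeerID *)peerID
{
    NSLog(@"Lost sight of peer %@", peerID.displayName);
}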

If we run the app up at this point we'll be able to input a peer name, and then bring up the browser to
search for other devices. At this stage we haven't implemented the advertising functionality for other
devices, so we can't connect to anything. We'll implement this in the next section; the pictures below show
the connection process if we do have a device to connect to, and the connection is accepted.


Advertising availability
Advertising availability is made possible through the MCAdvertiserAssistant class, which is responsible both
for managing the network layer, and also presenting an alert to the user to allow them to accept or reject an
incoming connection.
In the same way that we needed a session and peer ID to browse, we need them for advertising, so again we
allow the user to specify a string to be used as a peer name:
_peerID = [[MCPeerID alloc] initWithDisplayName:self.peerNameTextField.text];

_session = [[MCSession alloc] initWithPeer:_peerID];
_session.delegate = self;
_advertiserAssistant = [[MCAdvertiserAssistant alloc]
                         initWithServiceType:@"shinobi-stream"
                               discoveryInfo:nil
                                     session:_session];

We're using the same string for the serviceType parameter as we did within the browser - this is what enables
the connections to be matched appropriately.
Finally we need to start advertising our availability:
[_advertiserAssistant start];


If we now fire up the browser on one device, and the advertiser on another then they should be able to find
each other. When the device appears in the browser, and the user taps on it, then the user with the advertising
device will be presented with an alert allowing them to choose whether or not to make the connection:


Sending Data
There are 3 ways in which data can be transferred over the multipeer network we've established: an NSData
object, an NSStream, or a file-based resource. All three of these share a common paradigm - the
MCSession object has methods to initiate each of these transfers, and the session at the receiving end will then
call the appropriate delegate method.
For example, we're going to take a photo with one device and then have it automagically appear on the screen
of the other device. We'll use the NSData approach for this example, but the methodology is very similar for
each of them.
We use UIImagePickerController to take a simple photo:

UIImagePickerController *imagePicker = [UIImagePickerController new];
imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
imagePicker.delegate = self;
[self presentViewController:imagePicker animated:YES completion:NULL];

And implement the following delegate method to get the photo out as expected:
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *photo = info[UIImagePickerControllerOriginalImage];
    UIImage *smallerPhoto = [self rescaleImage:photo toSize:CGSizeMake(800, 600)];
    NSData *jpeg = UIImageJPEGRepresentation(smallerPhoto, 0.2);
    [self dismissViewControllerAnimated:YES completion:^{
        NSError *error = nil;
        [_session sendData:jpeg
                   toPeers:[_session connectedPeers]
                  withMode:MCSessionSendDataReliable
                     error:&error];
    }];
}

The line of interest here is the call to sendData:toPeers:withMode:error: on the MCSession object. This
takes an NSData object and sends it to other peers in the network - here we're sending it to all the connected
peers. The mode allows you to select whether you want the data transferred reliably. If you select reliable
then the messages will definitely arrive, and in the correct order, but with a higher time overhead. Using the
unreliable mode means that some messages may be lost, but the delay will be much smaller.
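One other note: rescaleImage:toSize: is a helper in the sample app rather than a UIKit method, and its implementation isn't listed in the chapter. A minimal sketch which draws the image into a scaled graphics context might look like this:

- (UIImage *)rescaleImage:(UIImage *)image toSize:(CGSize)size
{
    // Opaque context; a scale of 0.0 means "use the screen's scale factor"
    UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *rescaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rescaled;
}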
To receive the data on the other device we just provide an appropriate implementation for the correct delegate
method:
- (void)session:(MCSession *)session
 didReceiveData:(NSData *)data
       fromPeer:(MCPeerID *)peerID
{
    UIImage *image = [UIImage imageWithData:data];
    // Session delegate callbacks arrive on a background queue,
    // so hop to the main queue before touching the UI
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;
        self.imageView.contentMode = UIViewContentModeScaleAspectFill;
    });
}

Here we're simply creating a UIImage from the NSData object, and then setting it as the image on a
UIImageView - taking care to update the UI on the main queue, since session delegate methods are called on a
background queue. The following pictures show the photo being taken on one device, and then displayed on another:


The streaming and resource APIs work in much the same way, although the resource API provides
asynchronous progress updates, and is hence more suitable for large data transfers.
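As an illustration of the resource-based flavour, sending a file might look like the sketch below; the file URL and the choice of peer are assumptions, and the returned NSProgress object is what provides those asynchronous progress updates:

NSProgress *progress =
    [_session sendResourceAtURL:fileURL   // assumed local file URL
                       withName:@"photo.jpg"
                         toPeer:[_session connectedPeers].firstObject
          withCompletionHandler:^(NSError *error) {
              if (error) {
                  NSLog(@"Resource transfer failed: %@", error);
              }
          }];

On the receiving side, the session calls session:didFinishReceivingResourceWithName:fromPeer:atURL:withError: with a temporary local URL for the received file.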

Conclusion
The MultipeerConnectivity framework is incredibly powerful, and Apple-like in its concept of abstracting the
fiddly technical details away from the developer. It's pretty obvious that the new AirDrop functionality which
appeared in iOS7 is built on top of this framework, and that's very much the tip of the iceberg in terms of what
could be built using it. Imagine an iBeacon which, when you're near it, not only notifies you of
the fact, but then sends you information without using the internet. Maybe you could have multi-angle video
streamed to your device at a sports event, but only if you're in the venue? I can't wait to see what people
build!

Afterword
24 days' worth of new features is pretty impressive, and this list is by no means exhaustive. We've covered a
lot of ground, and I hope that you've learnt something along the way.
If you have any feedback about the book or its content then I'd love to hear it - hit me up on Twitter at
@iwantmyrealname, or email me at sdavies@shinobicontrols.com.
The day-by-day format is a lot of fun to create, and has hopefully been useful to you. I might well consider
producing similar blog series on different topics in the future - any suggestions or comments will be greatly
appreciated.

Useful Links
I've compiled a few useful links of interest, for further reading:

- shinobicontrols.com/blog (http://www.shinobicontrols.com/blog) - the ShinobiControls blog - to keep up to date on ShinobiControls products, and other technical series such as this one.
- iwantmyreal.name (http://iwantmyreal.name/) - my personal blog.
- raywenderlich.com (http://www.raywenderlich.com/) - an excellent resource for learning iOS, including the new book iOS 7 by Tutorials.
- What's new in iOS7 (https://developer.apple.com/library/ios/releasenotes/General/WhatsNewIniOS/Articles/iOS7.html#//apple_ref/doc/uid/TP40013162-SW1) - Apple's documentation for the new features introduced in iOS7.
