This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process. Lean
Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get
reader feedback, pivot until you have the right book and build traction once you do.
© 2013 Scott Logic
Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
  Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
  Book layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
  Source code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2

Day 1: NSURLSession . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  9
  Simple download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  9
  Tracking progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
  Canceling a download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
  Resumable download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
  Background download . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
  Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

Day 6: TintColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
  Tint color of existing iOS controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
  Tint Dimming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
  Using tint color in custom views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
  Tinting images with tintColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
  Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
  Dynamic Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
  Font Descriptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
  Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Afterword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Useful Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Preface
Welcome along to iOS7 Day-by-Day! In September of 2013 Apple released the 7th version of their exceedingly
popular mobile operating system into the world. With it came a new user interface appearance, new icons
and lots of other little changes for users to complain about. However, the most exciting changes were, as
ever, in the underlying APIs - with new frameworks and considerable new functionality added to existing
frameworks.
There are so many changes, in fact, that it's very difficult for a busy developer to pore over the release
notes to discover the features which they can take advantage of. Therefore I wrote and published a daily blog
series, in which each article discussed a new feature, and created a sample app to demonstrate it.
This series was very successful, and ran for a total of 24 days - covering many parts of the new operating
system, including both the big headline frameworks and also the somewhat smaller hidden gems. The only
notable omissions are the game-related frameworks, such as SpriteKit and changes to GameCenter. This
is, unapologetically, because I have little experience of games, and also felt that these were being covered
extensively elsewhere.
This book represents the sum total of the blog series - each chapter represents a different post in the
day-by-day series, with only minor changes. The original posts are still available online, and may offer some
additional information in the form of comments.
If you have any comments or corrections for the book then do let me know - I'm @iwantmyrealname on
Twitter.
Audience
Each chapter in this book is about a feature which was introduced in iOS7, and therefore is primarily targeted
at developers who have had some experience of building iOS apps. Having said that, non-developers familiar
with iOS might be interested in reading about the new features available.
If you are new to iOS development it's probably worth reading through some of the introductory material
available elsewhere - e.g. the excellent tutorials available on raywenderlich.com.
Book layout
This book is a collection of daily blog posts, which for the most part stand alone. There are one or two which
cross-reference each other, but they can be read entirely independently.
The chapters aren't meant to be complete tutorials, and as such, the code snippets within each chapter usually
just highlight the more salient bits of code associated with a particular step. However, each chapter has an
accompanying working app, the source code for which can be found on GitHub.
http://www.shinobicontrols.com/blog/posts/2013/09/19/introducing-ios7-day-by-day/
https://twitter.com/iwantmyrealname
http://www.raywenderlich.com
Source code
The GitHub repository at github.com/ShinobiControls/ios7-day-by-day contains projects which accompany
each chapter, organized by day number.
The projects are all built using Xcode 5, and should run straight after downloading. Any pull-requests for
fixes and improvements will be greatly appreciated!
https://github.com/ShinobiControls/ios7-day-by-day
Building a pendulum
Remembering back to high school science - one of the simplest objects studied in Newtonian physics is a
pendulum. Let's create a UIView to represent the ball-bearing:
// Create a small square view for the ball bearing (sizes and color illustrative)
UIView *ballBearing = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 40, 40)];
ballBearing.backgroundColor = [UIColor lightGrayColor];
ballBearing.center = CGPointMake(CGRectGetMidX(self.view.bounds), 200);
// Round the corners so it looks like a ball bearing
ballBearing.layer.cornerRadius = 20.f;
[self.view addSubview:ballBearing];
Now we can add some behaviors to this ball bearing. We'll create a composite behavior to collect the behaviors
together:

UIDynamicBehavior *behavior = [[UIDynamicBehavior alloc] init];
Next we'll start adding the behaviors we wish to model - first up gravity:

UIGravityBehavior *gravity =
    [[UIGravityBehavior alloc] initWithItems:@[ballBearing]];
gravity.gravityDirection = CGVectorMake(0.f, 2.f);
[behavior addChildBehavior:gravity];

Gravity behaviors have properties which allow you to configure the vector of the gravitational force (i.e. both
magnitude and direction). Here we are increasing the magnitude of the force, but keeping it directed in an
increasing y direction.
The other behavior we need to apply to our ball bearing is an attachment behavior - which represents the
string from which it hangs:

CGPoint anchor = CGPointMake(CGRectGetMidX(self.view.bounds), 50.f);
UIAttachmentBehavior *attachment =
    [[UIAttachmentBehavior alloc] initWithItem:ballBearing
                              attachedToAnchor:anchor];
[behavior addChildBehavior:attachment];

Attachment behaviors have properties which control the behavior of the attaching string - specifying its
frequency, damping and length. The default values ensure a completely rigid attachment, which is what we
want for a pendulum.
Now that the behaviors are specified on the ball bearing we can create the physics engine to look after it all,
which is defined as an ivar UIDynamicAnimator *_animator;:

_animator = [[UIDynamicAnimator alloc] initWithReferenceView:self.view];
[_animator addBehavior:behavior];

We create it and specify which view it should use as its reference view (i.e. specifying the spatial universe),
and add the composite behavior we've built.
With that we've actually created our first UIKit Dynamics system. However, if you run up the app, nothing
will happen. This is because the system starts in an equilibrium state - we need to perturb the system to see
some motion.
In the target for the gesture recognizer we apply a constant force behavior to the ball bearing:
- (void)handleBallBearingPan:(UIPanGestureRecognizer *)recognizer
{
    // If we're starting the gesture then create a drag force
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        if(_userDragBehavior) {
            [_animator removeBehavior:_userDragBehavior];
        }
        _userDragBehavior = [[UIPushBehavior alloc] initWithItems:@[recognizer.view]
                                                             mode:UIPushBehaviorModeContinuous];
        [_animator addBehavior:_userDragBehavior];
    }
    // Set the force to be proportional to the horizontal displacement
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGFloat xOffset = [recognizer translationInView:self.view].x;
        _userDragBehavior.pushDirection = CGVectorMake(xOffset / 10.f, 0.f);
    }
    // Remove the force when the gesture ends so that the pendulum can swing
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        [_animator removeBehavior:_userDragBehavior];
        _userDragBehavior = nil;
    }
}
UIPushBehavior represents a simple linear force applied to objects. We use the callback to apply a force to the
ball bearing, which displaces it. We have an ivar UIPushBehavior *_userDragBehavior which we create when
a gesture starts, remembering to add it to the dynamics animator. We set the size of the force to be proportional
to the horizontal displacement. In order for the pendulum to swing we remove the push behavior when the
gesture has ended.
Newton's Cradle
To recreate this using UIKit Dynamics we need to create multiple pendulums - following the same pattern
for each of them as we did above. They should be spaced so that they aren't quite touching (see the sample
code for details).
We also need to add a new behavior which will describe how the ball bearings collide with each other. We
now have an ivar to store the ball bearings NSArray *_ballBearings;:

UICollisionBehavior *collision =
    [[UICollisionBehavior alloc] initWithItems:_ballBearings];
[behavior addChildBehavior:collision];
Here we're using a collision behavior and a set of objects which are modeled in the system. Collision behaviors
can also be used to model objects hitting boundaries, such as view boundaries or arbitrary bezier path
boundaries.
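As a sketch of those boundary options (the boundary identifier and path here are illustrative, not from the sample project), a collision behavior can treat the reference view's bounds as a boundary, or be given an arbitrary bezier path:

```objectivec
// Treat the edges of the reference view as collision boundaries
UICollisionBehavior *collision =
    [[UICollisionBehavior alloc] initWithItems:_ballBearings];
collision.translatesReferenceBoundsIntoBoundary = YES;

// Alternatively, add an arbitrary bezier path as a boundary
UIBezierPath *ovalPath =
    [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 300, 320, 100)];
[collision addBoundaryWithIdentifier:@"oval" forPath:ovalPath];
```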
If you run the app now and try to move one of the pendulums you'll notice that the cradle doesn't behave as
you would expect it to. This is because the collisions are currently not elastic. We need to add a special type
of dynamic behavior to specify various shared properties:

UIDynamicItemBehavior *itemBehavior =
    [[UIDynamicItemBehavior alloc] initWithItems:_ballBearings];
// Perfectly elastic collisions, no air resistance, no rotation
itemBehavior.elasticity = 1.f;
itemBehavior.resistance = 0.f;
itemBehavior.allowsRotation = NO;
[behavior addChildBehavior:itemBehavior];
We use UIDynamicItemBehavior to specify the elasticity of the collisions, along with some other properties
such as resistance (pretty much air resistance) and rotation. If we allow rotation we can specify the angular
resistance. The dynamic item behavior also allows setting of linear and angular velocity which can be useful
when matching velocities with gestures.
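As a sketch of that velocity matching (the gesture handler and animator names are assumed from earlier listings), the linear velocity of a pan gesture can be transferred to an item so that it carries on moving at the speed of the user's finger:

```objectivec
// Match the item's velocity to the pan gesture's velocity when it ends
CGPoint gestureVelocity = [recognizer velocityInView:self.view];
UIDynamicItemBehavior *velocityMatcher =
    [[UIDynamicItemBehavior alloc] initWithItems:@[recognizer.view]];
[velocityMatcher addLinearVelocity:gestureVelocity forItem:recognizer.view];
[_animator addBehavior:velocityMatcher];
```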
Running the app up now will show a Newton's cradle which behaves exactly as you would expect in the
real world. Maybe as an extension you could investigate drawing the strings of the pendulums as well as the
ball bearings.
The code which accompanies this post represents the completed Newton's cradle project. It uses all the
elements introduced, but just tidies them up a little into a demo app.
Conclusion
This introduction to UIKit Dynamics has barely scratched the surface - with these building blocks really
complex physical systems can be modeled. This opens the door for creating apps which are heavily influenced
by our inherent understanding of motion and object interactions from the real world.
Day 1: NSURLSession
In the past, networking for iOS was performed using NSURLConnection, which used global state to manage
cookies and authentication. Therefore it was possible to have 2 different connections competing with each
other for shared settings. NSURLSession sets out to solve this problem, and a host of others as well.
The project which accompanies this guide includes the three different download scenarios discussed below.
This post won't describe the entire project - just the salient parts associated with the new NSURLSession
API.
Simple download
NSURLSession represents the entire state associated with multiple connections, which was formerly a shared
global state. Session objects are created with a factory method which takes a configuration object. There are
3 types of possible sessions:
1. Default, in-process session
2. Ephemeral (in-memory), in-process session
3. Background session
For a simple download we'll just use a default session:

NSURLSessionConfiguration *sessionConfig =
    [NSURLSessionConfiguration defaultSessionConfiguration];
Once a configuration object has been created there are properties on it which control the way it behaves.
For example, it's possible to set acceptable levels of TLS security, whether cookies are allowed, and timeouts.
Two of the more interesting properties are allowsCellularAccess and discretionary. The former specifies
whether a device is permitted to run the networking session when only a cellular radio is available. Setting
a session as discretionary enables the operating system to schedule the network access to sensible times -
i.e. when a WiFi network is available, and when the device has good power. This is primarily of use for
background sessions, and as such defaults to true for a background session.
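For instance, a configuration suitable for large, non-urgent transfers might look like this (the values here are illustrative, not from the sample project):

```objectivec
NSURLSessionConfiguration *config =
    [NSURLSessionConfiguration defaultSessionConfiguration];
config.allowsCellularAccess = NO;       // only transfer over WiFi
config.timeoutIntervalForRequest = 30.0;
config.discretionary = YES;             // let iOS pick a sensible time
```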
Once we have a session configuration object we can create the session itself:
NSURLSession *inProcessSession;
inProcessSession = [NSURLSession sessionWithConfiguration:sessionConfig
delegate:self
delegateQueue:nil];
Note here that we're also setting ourselves as a delegate. Delegate methods are used to notify us of the
progress of data transfers, and to request information when challenged for authentication. We'll implement
some appropriate methods soon.
Data transfers are encapsulated in tasks - of which there are three types:
1. Data task (NSURLSessionDataTask)
2. Upload task (NSURLSessionUploadTask)
3. Download task (NSURLSessionDownloadTask)
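As a sketch of the first of these (the URL is an assumed placeholder), a data task returns the response in memory rather than as a file, and can be created with a completion handler:

```objectivec
NSURL *url = [NSURL URLWithString:@"http://example.com/data.json"];
NSURLSessionDataTask *dataTask =
    [inProcessSession dataTaskWithURL:url
                    completionHandler:^(NSData *data,
                                        NSURLResponse *response,
                                        NSError *error) {
        if (!error) {
            // The whole response body is available in memory
            NSLog(@"Received %lu bytes", (unsigned long)[data length]);
        }
    }];
[dataTask resume];
```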
In order to perform a transfer within the session we need to create a task. For a simple file download:
NSString *url = @"http://url/for/image";
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
NSURLSessionDownloadTask *cancellableTask =
    [inProcessSession downloadTaskWithRequest:request];
[cancellableTask resume];
That's all there is to it - the session will now asynchronously attempt to download the file at the specified
URL.
In order to get hold of the requested file download we need to implement a delegate method:
- (void)URLSession:(NSURLSession *)session
      downloadTask:(NSURLSessionDownloadTask *)downloadTask
didFinishDownloadingToURL:(NSURL *)location
{
    // We've successfully finished the download. Let's save the file
    NSFileManager *fileManager = [NSFileManager defaultManager];
    // Copy the temporary file into the documents directory
    NSURL *documentsDir = [[fileManager URLsForDirectory:NSDocumentDirectory
                                               inDomains:NSUserDomainMask] firstObject];
    NSURL *destinationPath = [documentsDir
                URLByAppendingPathComponent:[location lastPathComponent]];
    NSError *error;
    [fileManager removeItemAtURL:destinationPath error:NULL];
    BOOL success = [fileManager copyItemAtURL:location
                                        toURL:destinationPath
                                        error:&error];

    if (success)
    {
        dispatch_async(dispatch_get_main_queue(), ^{
            UIImage *image = [UIImage imageWithContentsOfFile:[destinationPath path]];
            self.imageView.image = image;
            self.imageView.contentMode = UIViewContentModeScaleAspectFill;
            self.imageView.hidden = NO;
        });
    }
    else
    {
        NSLog(@"Couldn't copy the downloaded file");
    }

    if(downloadTask == cancellableTask) {
        cancellableTask = nil;
    }
}
This method is defined on NSURLSessionDownloadTaskDelegate. We get passed the temporary location of the
downloaded file, so in this code we're saving it off to the documents directory and then (since we have a
picture) displaying it to the user.
The above delegate method only gets called if the download task succeeds. The following method is
on NSURLSessionDelegate and gets called after every task finishes, irrespective of whether it completes
successfully:
- (void)URLSession:(NSURLSession *)session
task:(NSURLSessionTask *)task
didCompleteWithError:(NSError *)error
{
dispatch_async(dispatch_get_main_queue(), ^{
self.progressIndicator.hidden = YES;
});
}
If the error object is nil then the task completed without a problem. Otherwise it's possible to query it to
find out what the problem was. If a partial download has been completed then the error object contains a
reference to an NSData object which can be used to resume the transfer at a later stage.
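As a sketch, that resume data can be pulled out of the error's userInfo dictionary using the NSURLSessionDownloadTaskResumeData key (the partialDownload ivar is the one used later in this chapter):

```objectivec
if (error) {
    // If the server supports it, this data lets us resume the download later
    NSData *resumeData = error.userInfo[NSURLSessionDownloadTaskResumeData];
    if (resumeData) {
        partialDownload = resumeData;
    }
}
```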
Tracking progress
You'll have noticed that we hid a progress indicator as part of the task completion method at the end of the last
section. Updating the progress of this progress bar couldn't be easier. There is an additional delegate method
which is called zero or more times during the task's lifetime:
- (void)URLSession:(NSURLSession *)session
downloadTask:(NSURLSessionDownloadTask *)downloadTask
didWriteData:(int64_t)bytesWritten
totalBytesWritten:(int64_t)totalBytesWritten
totalBytesExpectedToWrite:(int64_t)totalBytesExpectedToWrite
{
double currentProgress = totalBytesWritten / (double)totalBytesExpectedToWrite;
dispatch_async(dispatch_get_main_queue(), ^{
self.progressIndicator.hidden = NO;
self.progressIndicator.progress = currentProgress;
});
}
This is another method which is part of the NSURLSessionDownloadTaskDelegate, and we use it here to
estimate the progress and update the progress indicator.
Canceling a download
Once an NSURLConnection had been sent off it was impossible to cancel it. This has changed - it's now easy
to cancel an NSURLSessionTask:
- (IBAction)cancelCancellable:(id)sender {
if(cancellableTask) {
[cancellableTask cancel];
cancellableTask = nil;
}
}
It's as easy as that! It's worth noting that the URLSession:task:didCompleteWithError: delegate method will
be called once a task has been canceled, to enable you to update the UI appropriately. It's quite possible that after canceling a task the URLSession:downloadTask:didWriteData:totalBytesWritten:totalBytesExpectedToWrite:
method might be called again; however, the didComplete method will definitely be last.
Resumable download
It's also possible to resume a download pretty easily. There is an alternative cancel method which provides
an NSData object which can be used to create a new task to continue the transfer at a later stage. If the server
supports resuming downloads then the data object will include the bytes already downloaded:
- (IBAction)cancelCancellable:(id)sender {
if(self.resumableTask) {
[self.resumableTask cancelByProducingResumeData:^(NSData *resumeData) {
partialDownload = resumeData;
self.resumableTask = nil;
}];
}
}
Here we've popped the resume data into an ivar which we can later use to resume the download.
When creating the download task, rather than supplying a request you can provide a resume data object:
if(!self.resumableTask) {
if(partialDownload) {
self.resumableTask = [inProcessSession
downloadTaskWithResumeData:partialDownload];
} else {
NSString *url = @"http://url/for/image";
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
self.resumableTask = [inProcessSession downloadTaskWithRequest:request];
}
[self.resumableTask resume];
}
If we've got a partialDownload object then we create the task using that, otherwise we create the task as we
did before.
The only other thing to remember here is that we need to set partialDownload = nil; when the process
ends.
Background download
The other major feature that NSURLSession introduces is the ability to continue data transfers even when your
app isn't running. In order to do this we configure a session to be a background session:
- (NSURLSession *)backgroundSession
{
    static NSURLSession *backgroundSession = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSString *confStr = @"com.shinobicontrols.BackgroundDownload.BackgroundSession";
        NSURLSessionConfiguration *config = [NSURLSessionConfiguration
                                       backgroundSessionConfiguration:confStr];
        backgroundSession = [NSURLSession sessionWithConfiguration:config
                                                          delegate:self
                                                     delegateQueue:nil];
    });
    return backgroundSession;
}
It's important to note that we can only create one session with a given background token, hence the dispatch_once
block. The purpose of the token is to allow us to collect the session once our app is restarted. Creating a
background session starts up a background transfer daemon which will manage the data transfer for us. This
will continue to run even when the app has been suspended or terminated.
Starting a background download task is exactly the same as we did before - all of the background
functionality is managed by the NSURLSession we have just created:
NSString *url = @"http://url/for/image";
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:url]];
self.backgroundTask = [[self backgroundSession] downloadTaskWithRequest:request];
[self.backgroundTask resume];
Now, even when you press the home button to leave the app, the download will continue in the background
(subject to the configuration options mentioned at the start).
When the download is completed then iOS will restart your app to let it know - and to pass it the payload. To
do this it calls the following method on your app delegate:
- (void)application:(UIApplication *)application
handleEventsForBackgroundURLSession:(NSString *)identifier
completionHandler:(void (^)())completionHandler
{
self.backgroundURLSessionCompletionHandler = completionHandler;
}
Here we get passed a completion handler which, once we've accepted the downloaded data and updated our
UI appropriately, we should call. Here we're saving off the completion handler (remembering that blocks
have to be copied), and letting the loading of the view controller manage the data handling. When the view
controller is loaded it creates the background session (which sets the delegate) and therefore the same delegate
methods we were using before are called.
- (void)URLSession:(NSURLSession *)session
      downloadTask:(NSURLSessionDownloadTask *)downloadTask
didFinishDownloadingToURL:(NSURL *)location
{
    // Save the file off as before, and set it as an image view
    //...
    if (session == self.backgroundSession) {
        self.backgroundTask = nil;
        // Get hold of the app delegate
        SCAppDelegate *appDelegate =
            (SCAppDelegate *)[[UIApplication sharedApplication] delegate];
        if(appDelegate.backgroundURLSessionCompletionHandler) {
            // Need to copy the completion handler
            void (^handler)() = appDelegate.backgroundURLSessionCompletionHandler;
            appDelegate.backgroundURLSessionCompletionHandler = nil;
            handler();
        }
    }
}
Summary
NSURLSession provides a lot of invaluable new features for dealing with networking in iOS (and OSX 10.9),
and replaces the old way of doing things. It's worth getting to grips with it and using it for all apps that can
be targeted at the new operating systems.
AppIcon selection
Simply dragging images from the Finder into the asset catalog manager in Xcode will bring the image into the
asset catalog. If you have provided an incorrectly sized image this will raise a warning in Xcode:
Custom imagesets
As well as the standard collections, you can use asset catalogs to manage your own images. Images are
contained within an ImageSet, with a reference for both retina and non-retina versions of the same image.
Creating an image set is done within Xcode, and you can organize image sets within folders for ease of
navigation. Using the images stored inside an asset catalog is as simple as using UIImage's imageNamed: method:

UIImage *image = [UIImage imageNamed:@"MyImage"];  // name of an image set (illustrative)
Slicing images
The other major feature of asset catalogs is the ability to do image slicing. Creating images which are resizable in this manner has been available since iOS 2, but this new feature in Xcode makes it really simple to
do.
Resizing images using slicing is a common technique for creating visual elements such as buttons - where
the center of the image should be stretched or tiled to the new size, the edges should be stretched in one
direction only, and the corners should remain the same size.
Slicing is available on an ImageSet within the asset catalog - enabled by clicking the Show Slicing button.
You can choose horizontal, vertical or both for scaling direction. Your image will then be overlaid with guides
which mark the fixed endpoints and the size of the resizable central section:
Slice ImageSet
Using these sliced images is really easy - simply create a UIImage as before, and then when you resize the
UIImageView used to display it, the image will rescale as per the slicing.
UIImage *btnImage = [UIImage imageNamed:@"Button"];  // a sliced image set (name illustrative)

// Let's make 2
UIImageView *iv = [[UIImageView alloc] initWithImage:btnImage];
iv.bounds = CGRectMake(0, 0, 150, CGRectGetHeight(iv.bounds));
iv.center = CGPointMake(CGRectGetWidth(self.view.bounds) / 2, 300);
[self.view addSubview:iv];

UIImageView *iv2 = [[UIImageView alloc] initWithImage:btnImage];
iv2.bounds = CGRectMake(0, 0, 300, CGRectGetHeight(iv2.bounds));
iv2.center = CGPointMake(CGRectGetWidth(self.view.bounds) / 2, 400);
[self.view addSubview:iv2];
Sliced result
Conclusion
Asset catalogs arent a ground-breaking addition to the iOS developers toolkit, but they really do take some
of the pain out of the fiddly aspects of development. They come as enabled for new projects with Xcode 5,
and will make asset management a much less arduous task.
The other thing that you need to do is specify how often you would like to be woken up to perform a
background fetch. If you know that your data is only going to be updated every hour, then that's information
that the iOS fetch scheduler can use. If you aren't sure then you can use the recommended value:
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Set the fetch interval so that it is actually called
    [[UIApplication sharedApplication]
        setMinimumBackgroundFetchInterval:UIApplicationBackgroundFetchIntervalMinimum];
    return YES;
}
Implementation
When a background fetch occurs, iOS starts the app and then calls the application delegate method application:performFetchWithCompletionHandler:. The app then has a certain amount of time to perform the fetch and call the completion handler block it has been provided.
The project which accompanies this article is a traffic status app - which simulates receiving notifications about traffic conditions on roads and then displays them in a UITableView. In this demo, the updates are randomly generated - and this can be seen by pulling the table to refresh, which has the following method as its target:
- (void)refreshStatus:(id)sender
{
    [self createNewStatusUpdatesWithMin:0 max:3 completionBlock:^{
        [refreshControl endRefreshing];
    }];
}

- (NSUInteger)createNewStatusUpdatesWithMin:(NSUInteger)min
                                        max:(NSUInteger)max
                            completionBlock:(SCTrafficStatusCreationComplete)compHandler
{
    NSUInteger numberToCreate = arc4random_uniform(max-min) + min;
    NSMutableArray *indexPathsToUpdate = [NSMutableArray array];

    // Create numberToCreate random status updates (via [SCTrafficStatus
    // randomStatus]), add them to the backing data store and collect their
    // index paths in indexPathsToUpdate (elided)

    [self.tableView insertRowsAtIndexPaths:indexPathsToUpdate
                          withRowAnimation:UITableViewRowAnimationFade];
    if(compHandler) {
        compHandler();
    }
    return numberToCreate;
}
Here we create a random number of random updates (using the randomStatus method on SCTrafficStatus, which, as its name suggests, generates a random status object). We then update our backing data store, refresh the table and call the completion handler. This is all standard UITableView code, and this is where you can slot in the code which actually updates your data store from the network.
In order to add the facility to create updates using background fetch, we add a method to the API of our view
controller:
- (NSUInteger)insertStatusObjectsForFetchWithCompletionHandler:
(void (^)(UIBackgroundFetchResult))completionHandler
{
NSUInteger numberCreated = [self createNewStatusUpdatesWithMin:0
max:3
completionBlock:NULL];
NSLog(@"Background fetch completed - %lu new updates", (unsigned long)numberCreated);
UIBackgroundFetchResult result = UIBackgroundFetchResultNoData;
if(numberCreated > 0) {
result = UIBackgroundFetchResultNewData;
}
completionHandler(result);
return numberCreated;
}
This method takes a completion handler of the form used by the app delegate background fetch method - so we can use this later on. First we create some new updates, using the method we described before. The completion handler needs to be informed whether the update worked, and if it did, whether new data was delivered. We establish this using the return value of our create method, and then call the completion handler with the appropriate result.
This completion handler is used to tell iOS that we're done and that, if appropriate, we're ready to have our snapshot taken to update the display in the app switcher.
Finally, we need to link this up with the app delegate method:
- (void)application:(UIApplication *)application
performFetchWithCompletionHandler:(void (^)(UIBackgroundFetchResult))completionHandler
{
// Get hold of the view controller
SCViewController *vc = (SCViewController *)self.window.rootViewController;
// Insert status updates and pass in the completion handler block
NSUInteger numberInserted =
[vc insertStatusObjectsForFetchWithCompletionHandler:completionHandler];
[UIApplication sharedApplication].applicationIconBadgeNumber += numberInserted;
}
Now, when the app is woken up for a background fetch, it will call through to the view controller, and perform
the update. Refreshingly simple.
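Note that background fetch must also be declared as a background mode for the app - in Xcode 5 this is done via the Capabilities tab, which adds the fetch entry to the UIBackgroundModes array in Info.plist:

```xml
<key>UIBackgroundModes</key>
<array>
    <string>fetch</string>
</array>
```

Without this entry, application:performFetchWithCompletionHandler: will never be called.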
Testing
So far, we haven't tested any of this code, and it's not immediately obvious how to simulate background fetch events. Xcode 5 has this sorted, but before we dive in we need to consider 2 cases:
1. App currently running in the background
The user has started the app and has left it to do something else, but the app is continuing to run in the background (i.e. it hasn't been terminated). Xcode provides a new debugging method to simulate this, so testing is as simple as running the app, pressing the home button and then invoking the new debug method:
Whilst debugging it's a good idea to have some logging in your fetch update methods to observe the fetch event taking place. In the sample app, this will update the app's badge on the home screen.
2. App currently in terminated state
The app has run before, but was terminated, either by the user or by iOS. The easiest way to simulate this is to add a new scheme to Xcode. Click manage schemes from the scheme drop-down in Xcode, and then duplicate the existing scheme. Editing the new scheme, update the run task with the option to launch as a background fetch process:
Now, when you run this scheme you'll see the simulator start up, but your app won't be launched. If you've got some logging in the background fetch delegate method then you'll see that output. See the attached project for an example of this.
Conclusion
Background fetch offers the opportunity to enhance the user experience of your app for a small amount of effort. If your app relies on data updates from the internet, then this is a really simple way to ensure that your user always has the latest information when the app launches.
Voices
iOS 7 contains a set of different voices which can be used for speech synthesis. You can use these to specify the language and variant you wish to synthesize. AVSpeechSynthesisVoice's +speechVoices method returns an array of the available voices:
(
    "[AVSpeechSynthesisVoice 0x978b5d0] Language: ar-SA",
    "[AVSpeechSynthesisVoice 0x978b620] Language: ko-KR",
    "[AVSpeechSynthesisVoice 0x978b670] Language: cs-CZ",
    "[AVSpeechSynthesisVoice 0x978b6c0] Language: en-ZA",
    "[AVSpeechSynthesisVoice 0x978aed0] Language: en-AU",
    "[AVSpeechSynthesisVoice 0x978af20] Language: da-DK",
    "[AVSpeechSynthesisVoice 0x978b810] Language: en-US",
    "[AVSpeechSynthesisVoice 0x978b860] Language: en-IE",
    "[AVSpeechSynthesisVoice 0x978b8b0] Language: hi-IN",
    "[AVSpeechSynthesisVoice 0x978b900] Language: el-GR",
    "[AVSpeechSynthesisVoice 0x978b950] Language: ja-JP"
)
If the language isn't recognized then the return value will be nil.
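A voice for a specific language can be requested with voiceWithLanguage:, guarding against the nil return. A short sketch:

```objc
// Request a British English voice; fall back to the device's current
// language if the code isn't recognized
AVSpeechSynthesisVoice *voice =
    [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"];
if (!voice) {
    voice = [AVSpeechSynthesisVoice voiceWithLanguage:
                [AVSpeechSynthesisVoice currentLanguageCode]];
}
```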
Utterances
An utterance represents a section of speech - a collection of which can be passed to the speech synthesizer
to create a stream of speech. An utterance is created with the string which will be spoken by the speech
synthesizer:
AVSpeechUtterance *utterance =
[AVSpeechUtterance speechUtteranceWithString:@"Hello world!"];
We can specify the voice for an utterance with the voice property:
utterance.voice = voice;
There are other properties which can be set on an utterance, including rate, volume and pitchMultiplier.
For example, to slow down the speech a little:
utterance.rate *= 0.7;
Once an utterance has been created it can be passed to a speech synthesizer object, which will cause the audio to be generated:

AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
[synthesizer speakUtterance:utterance];
Utterances are queued by the synthesizer, so you can continue to pass utterances without waiting for the speech to be completed. If you attempt to pass an utterance instance which is currently in the queue then an exception will be thrown.
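To find out when queued utterances actually finish, the synthesizer's delegate can be used - a sketch assuming a class which adopts AVSpeechSynthesizerDelegate and has been set as the synthesizer's delegate:

```objc
// Called by the synthesizer as each queued utterance completes
- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer
 didFinishSpeechUtterance:(AVSpeechUtterance *)utterance
{
    NSLog(@"Finished speaking: %@", utterance.speechString);
}
```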
Implementation
The sample project which accompanies this article is a multi-lingual greeting app. This demonstrates the versatility of the speech synthesis functionality present in iOS 7.
It's important to note that the strings which define the utterances are all specified in the Roman alphabet - e.g. Ni hao in Chinese. The sample project defines a class which creates utterances for a set of languages.
The project has a picker to allow the user to choose a language and then a button to hear the greeting spoken in the appropriate language.
Conclusion
Speech synthesis has been made really simple in iOS 7, with a wide range of languages. Used sensibly it has potential for improving accessibility and enabling hands/eyes-free operation of apps.
Building a Carousel
In order to demonstrate using the physics engine with a collection view, we firstly need to make a carousel out of a UICollectionView. This post isn't a tutorial on how to use UICollectionView, so I'll skip briefly through this part. We'll make the view controller the datasource and delegate for the collection view, and implement the methods we need:
- (NSInteger)collectionView:(UICollectionView *)collectionView
     numberOfItemsInSection:(NSInteger)section
{
    return [_collectionViewCellContent count];
}
The cells are each square tiles which contain a number inside a UILabel. The numbers of the cells we are currently displaying in the collection view are stored inside an array (_collectionViewCellContent) as NSNumber objects. We do this to preserve the ordering of the cells - not important at this stage, but it will be once we work out how to insert new cells.
In order to get the collection view to appear as a horizontal carousel we need to provide a custom layout. As is often the case, the flow layout has a lot of what we need, so we'll subclass that:
@interface SCSpringyCarousel : UICollectionViewFlowLayout
- (instancetype)initWithItemSize:(CGSize)size;
@end
In order to force all of the items into a horizontal carousel at the bottom of the view we need to know the
item height - hence the constructor which requires an item size. We override the prepareLayout method to
set the content inset to push the items to the bottom of the collection view:
- (void)prepareLayout
{
// We update the section inset before we layout
self.sectionInset = UIEdgeInsetsMake(
CGRectGetHeight(self.collectionView.bounds) - _itemSize.height,
0, 0, 0);
[super prepareLayout];
}
Setting this as the layout on the collection view will create the horizontal carousel we're after.

- (void)viewDidLoad
{
    [super viewDidLoad];
    ...
    // Provide the layout
    _collectionViewLayout = [[SCSpringyCarousel alloc] initWithItemSize:itemSize];
    self.collectionView.collectionViewLayout = _collectionViewLayout;
}
Non-springy carousel
Adding springs
Now on to the more exciting stuff - let's fix this up with the UIKit Dynamics physics engine.
The physical model we're going to use has each item connected to the position it would have been fixed to in a vanilla flow layout - i.e. we take the items from the carousel we've already made, and attach them to their positions with springs. Then, as we scroll the view, the springs will stretch and we'll get the effect we want. Well, nearly; we also need to perturb the springs a distance proportional to the distance from the touch point, but we'll come to that when the time is right.
Translating this model into UIKit Dynamics concepts works as follows:
- When we are preparing the layout we request the positioning information from the flow layout super class.
- We add appropriate behaviors to these positioning objects to allow them to be animated in the physics world.
- These behaviors and position objects are passed to the animator so that the simulation can run.
- The methods on the UICollectionViewLayout are overridden to return the positions from the animator, instead of the flow layout superclass.
This all sounds a lot more complicated than it actually is - honestly! Let's work through it in stages.
Behavior Manager
In order to keep the code nice and tidy, we'll create a class which manages the dynamic behaviors inside the animator. Its API should look like the following:

- (instancetype)initWithAnimator:(UIDynamicAnimator *)animator;
- (void)addItem:(UICollectionViewLayoutAttributes *)item anchor:(CGPoint)anchor;
- (void)removeItemAtIndexPath:(NSIndexPath *)indexPath;
- (void)updateItemCollection:(NSArray *)items;
- (NSArray *)currentlyManagedItemIndexPaths;
The behavior of each of our cells is constructed from shared UIGravityBehavior and UICollisionBehavior objects and an individual UIAttachmentBehavior. We create our behavior manager with a UIDynamicAnimator and expose methods for adding and removing items, as well as a method to update the collection to match an array.
When we create a manager object then we want to create the shared behaviors, and attach them to the
animator:
- (instancetype)initWithAnimator:(UIDynamicAnimator *)animator
{
self = [super init];
if(self) {
_animator = animator;
_attachmentBehaviors = [NSMutableDictionary dictionary];
[self createGravityBehavior];
[self createCollisionBehavior];
// Add the global behaviors to the animator
[self.animator addBehavior:self.gravityBehavior];
[self.animator addBehavior:self.collisionBehavior];
}
return self;
}
with the 2 utility methods called here being very simple, and having a similar composition to what we used for the Newton's Cradle project back on day 0:
- (void)createGravityBehavior
{
_gravityBehavior = [[UIGravityBehavior alloc] init];
_gravityBehavior.magnitude = 0.3;
}
- (void)createCollisionBehavior
{
_collisionBehavior = [[UICollisionBehavior alloc] init];
_collisionBehavior.collisionMode = UICollisionBehaviorModeBoundaries;
_collisionBehavior.translatesReferenceBoundsIntoBoundary = YES;
// Need to add item behavior specific to this
UIDynamicItemBehavior *itemBehavior = [[UIDynamicItemBehavior alloc] init];
itemBehavior.elasticity = 1;
// Add it as a child behavior
[_collisionBehavior addChildBehavior:itemBehavior];
}
You'll notice that we don't add any dynamic items to the behaviors at this stage - principally because we don't actually have any yet. The collision behavior isn't going to be used for collisions between the individual cells, but instead with the boundary of the collection view. Hence the setting of the two properties collisionMode and translatesReferenceBoundsIntoBoundary. We also add a UIDynamicItemBehavior to specify the elasticity of the collisions, in the same way that we did with the pendula.
Now we have created these global behaviors we need to implement the addItem: and removeItem: methods.
The add method will add the new item to the global behaviors and also set up the spring which attaches the
cell to the background canvas:
- (void)addItem:(UICollectionViewLayoutAttributes *)item anchor:(CGPoint)anchor
{
    // Create the spring which attaches the cell to its anchor point
    UIAttachmentBehavior *attachmentBehavior =
        [self createAttachmentBehaviorForItem:item anchor:anchor];
    // Store the spring, keyed by index path, so we can find it again later
    [self.attachmentBehaviors setObject:attachmentBehavior
                                 forKey:item.indexPath];
    [self.animator addBehavior:attachmentBehavior];
    // Add the item to the shared gravity and collision behaviors
    [self.gravityBehavior addItem:item];
    [self.collisionBehavior addItem:item];
}

- (UIAttachmentBehavior *)createAttachmentBehaviorForItem:(id<UIDynamicItem>)item
                                                   anchor:(CGPoint)anchor
{
    UIAttachmentBehavior *attachmentBehavior = [[UIAttachmentBehavior alloc]
                                                initWithItem:item
                                                attachedToAnchor:anchor];
    attachmentBehavior.damping = 0.5;
    attachmentBehavior.frequency = 0.8;
    attachmentBehavior.length = 0;
    return attachmentBehavior;
}
We also store the attachment behavior in a dictionary, keyed by the NSIndexPath. This will allow us to work out which spring we need to remove when we implement the remove method.
Once we've created the attachment behavior we add it to the animator, and add the provided item to the shared gravity and collision behaviors.
The remove method performs exactly the opposite operation - removing the attachment behavior from the animator and the item from the shared gravity and collision behaviors:
- (void)removeItemAtIndexPath:(NSIndexPath *)indexPath
{
    // Remove the attachment behavior from the animator
    UIAttachmentBehavior *attachmentBehavior = self.attachmentBehaviors[indexPath];
    [self.animator removeBehavior:attachmentBehavior];
    [self.attachmentBehaviors removeObjectForKey:indexPath];

    // The items the shared behaviors act on are copies, so we must search
    // for the one with the matching index path
    for (UICollectionViewLayoutAttributes *attr in
         [self.gravityBehavior.items copy]) {
        if ([attr.indexPath isEqual:indexPath]) {
            [self.gravityBehavior removeItem:attr];
        }
    }
    for (UICollectionViewLayoutAttributes *attr in
         [self.collisionBehavior.items copy]) {
        if ([attr.indexPath isEqual:indexPath]) {
            [self.collisionBehavior removeItem:attr];
        }
    }
}
This method is slightly more complicated than we would like. Removing the attachment behavior is as we would expect, but removing the item from the shared behaviors is a little more involved. The item objects have been copied, and so have different references. Therefore we need to search through all of the items each shared behavior is acting upon, and remove the one with the same index path.
There is one more method on the API of the behavior manager - updateItemCollection:. This method takes a collection of items and then calls the addItem:anchor: and removeItemAtIndexPath: methods with the correct arguments to ensure that the manager is currently managing the correct items. We'll see very soon why this is useful, but let's take a look at the implementation:
- (void)updateItemCollection:(NSArray *)items
{
    // Let's find the ones we need to remove. We work in indexPaths here
    NSMutableSet *toRemove = [NSMutableSet
        setWithArray:[self.attachmentBehaviors allKeys]];
    [toRemove minusSet:[NSSet setWithArray:[items valueForKeyPath:@"indexPath"]]];

    // Remove the items we no longer need
    for (NSIndexPath *indexPath in toRemove) {
        [self removeItemAtIndexPath:indexPath];
    }

    // Find the items we need to add springs to. A bit more complicated =(
    // Loop through the items we want
    NSArray *existingIndexPaths = [self currentlyManagedItemIndexPaths];
    for(UICollectionViewLayoutAttributes *attr in items) {
        // Find whether this item matches an existing index path
        BOOL alreadyExists = NO;
        for(NSIndexPath *indexPath in existingIndexPaths) {
            if ([indexPath isEqual:attr.indexPath]) {
                alreadyExists = YES;
            }
        }
        // If it doesn't then let's add it
        if(!alreadyExists) {
            // Need to add
            [self addItem:attr anchor:attr.center];
        }
    }
}
It's a fairly simple method - we first find the items we need to remove, using some simple set operations ({items we currently have} minus {items we should have}). Then we loop through the resultant set and call the removeItemAtIndexPath: method.
To work out the items we need to add, we try to find each item in the collection we've been sent in our dictionary of managed items. If we can't find it then we need to start managing the behavior for it, so we call the addItem:anchor: method. Importantly, the anchor point is the current center position provided in the UIDynamicItem object. In terms of the UICollectionView, this means that we want our item to be anchored to the position the flow layout would like to place it.
- (void)prepareLayout
{
    // We update the section inset before we layout
    self.sectionInset = UIEdgeInsetsMake(
        CGRectGetHeight(self.collectionView.bounds) - _itemSize.height,
        0, 0, 0);
    [super prepareLayout];

    // Expand the viewport so that items about to scroll on screen are
    // already under the control of the animator
    CGRect expandedBounds = CGRectInset(self.collectionView.bounds,
                                        -_itemSize.width, 0);
    NSArray *currentItems = [super layoutAttributesForElementsInRect:expandedBounds];

    // We update our behavior collection to contain the items we can currently see
    [_behaviorManager updateItemCollection:currentItems];
}
The first few lines of code are exactly as before. We then work out an expanded viewport bounds. This involves taking the current viewport and expanding it to the left and right, ensuring that the items which are soon to appear on screen are under the control of our dynamic animator. Once we have the viewport we ask our superclass for the layout attributes for all the items which would appear within this rectangle - i.e. all the items which would have appeared within that range had we been using a vanilla flow layout. Like UIView, these UICollectionViewLayoutAttributes objects all adopt the UIDynamicItem protocol, and hence can be animated by our UIDynamicAnimator. We pass this collection of objects through to our behavior manager to ensure that we are managing the behavior of the correct items.
The next method we need to override is shouldInvalidateLayoutForBoundsChange:. We don't actually want to change the behavior of this method (the default returns NO and we won't change this), but it gets called whenever the bounds of our collection view change. In the world of scroll views, the bounds property represents the current viewport position - i.e. the x and y values are not necessarily 0 as they usually are. Therefore, a bounds change event in a UIScrollView subclass actually occurs as the scroll view is scrolled.
This method is the most complicated part of this demo project, so we'll step through it bit-by-bit.
- (BOOL)shouldInvalidateLayoutForBoundsChange:(CGRect)newBounds
{
    // 1. How far have we scrolled since we last updated the springs?
    CGFloat scrollDelta = newBounds.origin.x - self.collectionView.bounds.origin.x;

    // 2. Where is the current touch within the collection view?
    CGPoint touchLocation = [self.collectionView.panGestureRecognizer
                             locationInView:self.collectionView];

    // 3. Update each of the springs managed by the behavior manager
    for (UIAttachmentBehavior *behavior in
         [_behaviorManager.attachmentBehaviors allValues]) {
        // 4. How far is this item's rest position (the anchor) from the touch?
        CGFloat distFromTouch = ABS(touchLocation.x - behavior.anchorPoint.x);

        // 5. Work out the new position of the cell, using a 'magic'
        //    scrollFactor (the 500 divisor is a hand-tuned constant)
        UICollectionViewLayoutAttributes *attr = [behavior.items firstObject];
        CGFloat scrollFactor = distFromTouch / 500;
        CGPoint center = attr.center;
        center.x += scrollDelta * scrollFactor;
        attr.center = center;

        // 6. Push the new state into the animator's copy of the item
        [_dynamicAnimator updateItemUsingCurrentState:attr];
    }

    // 7. The animator is managing cell positions - no invalidation needed
    return NO;
}
1. Firstly we find out how much we have just scrolled the scroll view - since we were last called, and hence last updated our springs.
2. We can then find the location of the current touch within the collection view, since we have access to the panGestureRecognizer of the underlying scroll view.
3. Now we need to loop through each of the springs in the behavior manager, updating them.
4. Firstly we find out how far our item's rest position (i.e. the behavior's anchor point) is from the touch. This is because we're going to stretch the springs proportionally to how far they are from our touch point.
5. Then we work out the new position of the current cell - using a 'magic' scrollFactor and the actual scrollDelta.
6. We tell the dynamic animator that it should refresh its understanding of the item's state. When an item is added to a dynamic animator it makes an internal copy of the item's state and then animates that. In order to push new state in we update the UIDynamicItem properties and then tell the animator that it should reload the state of this item.
7. Finally we return NO - we are letting the dynamic animator manage the positions of our cells; we don't need the collection view to re-request them from the layout.
There are 2 more methods we need to override; the purpose of both is to remove the responsibility of item layout from the flow layout class, and give it instead to the dynamic animator:
- (NSArray *)layoutAttributesForElementsInRect:(CGRect)rect
{
return [_dynamicAnimator itemsInRect:rect];
}
- (UICollectionViewLayoutAttributes *)layoutAttributesForItemAtIndexPath:
(NSIndexPath *)indexPath
{
return [_dynamicAnimator layoutAttributesForCellAtIndexPath:indexPath];
}
The dynamic animator has 2 helper methods for precisely this purpose, which plug nicely into the collection
view layout class. These methods are used by the collection view to position the cells. We simply get the
dynamic animator to return the positions of the relevant cells - either by indexPath or for the cells which are
visible in the specified rectangle.
Test run
Well, if you run this project now you should have a horizontal carousel which gives a springy effect as you drag: cells ahead of the drag direction bunch up, and those behind spread out.
Inserting items
Now that we've got this springy carousel working, we're going to see how difficult it is to govern adding new cells using the dynamic animator as well as scrolling. We've actually done a lot of the work, so let's see what we need to add.
With a standard UICollectionView, the layout provides the layout attributes for an appearing item, and then the item will be animated to its final position within the collection - i.e. the position returned by layoutAttributesForItemAtIndexPath:. However, we are going to perform the animation using our UIDynamicAnimator, and therefore need to prevent UIView animations. To do this add the following line to prepareLayout:

[UIView setAnimationsEnabled:NO];

This will ensure that we don't have 2 different animation processes fighting against each other.
As mentioned, the UICollectionViewLayout will get called to ask where a new item should be positioned, using the snappily named initialLayoutAttributesForAppearingItemAtIndexPath: method. We are going to let our animator handle this:
- (UICollectionViewLayoutAttributes *)initialLayoutAttributesForAppearingItemAtIndexPath:
(NSIndexPath *)itemIndexPath
{
return [_dynamicAnimator layoutAttributesForCellAtIndexPath:itemIndexPath];
}
Now we need to let the animator know that a new item is arriving, update the positions of the existing items appropriately, and position the new one. We override the prepareForCollectionViewUpdates: method on the SCSpringyCarousel class:
- (void)prepareForCollectionViewUpdates:(NSArray *)updateItems
{
    for (UICollectionViewUpdateItem *updateItem in updateItems) {
        if(updateItem.updateAction == UICollectionUpdateActionInsert) {
            // Reset the springs of the existing items
            [self resetItemSpringsForInsertAtIndexPath:updateItem.indexPathAfterUpdate];

            // Where would the flow layout like to place the new cell?
            UICollectionViewLayoutAttributes *attr =
                [super initialLayoutAttributesForAppearingItemAtIndexPath:
                    updateItem.indexPathAfterUpdate];
            CGPoint center = attr.center;
            CGSize contentSize = [self collectionViewContentSize];
            center.y -= contentSize.height - CGRectGetHeight(attr.bounds);

            // Ask the animator for the new item's attributes, and move it
            // to the drop-in position we've just calculated
            UICollectionViewLayoutAttributes *insertionAttrs =
                [_dynamicAnimator layoutAttributesForCellAtIndexPath:
                    updateItem.indexPathAfterUpdate];
            insertionAttrs.center = center;
            [_dynamicAnimator updateItemUsingCurrentState:insertionAttrs];
        }
    }
}
This is a long method, but we can break it down into simple chunks:
1. This method gets called for inserts, removals and moves. We're only interested in insertions for this project, so we only do something if our update is of type UICollectionUpdateActionInsert.
2. When an insert happens, the collection view will re-assign the layout attributes of the cells above the insertion index to their nextmost neighbor - i.e. if inserting at index 4, then the cell currently at 5 will be updated to have the layout attributes of the cell currently at 6, etc. In our scenario we want to keep the anchor point of the behavior associated with the layout attributes of our neighbor, but the position should be our current position - not that of our neighbor. We perform this with a utility method, resetItemSpringsForInsertAtIndexPath:, which we'll look at later.
3. Now we deal with the new cell which is being inserted. We ask the flow layout where it would like to position it. We want it to appear at the top of the collection view, so that the animator will drop it down using the gravity behavior. We use this to work out where the center of the inserted cell should be.
4. Now we ask the animator for the layout attributes for the index path we're inserting at, and then update the position to match the one we've just calculated.
The final piece of the puzzle is the aforementioned method which is used to update the springs of the items moved to make space for the new item:
The final piece of the puzzle is the aforementioned method which is used to update the springs of the items
moved to make space for the new item:
- (void)resetItemSpringsForInsertAtIndexPath:(NSIndexPath *)indexPath
{
    // Get a list of items, sorted by their indexPath
    NSArray *items = [_behaviorManager currentlyManagedItemIndexPaths];

    // Now loop backwards, updating centers appropriately.
    // We need to get 2 enumerators - copy from one to the other
    NSEnumerator *fromEnumerator = [items reverseObjectEnumerator];
    // We want to skip the lastmost object in the array as we're copying left to right
    [fromEnumerator nextObject];

    // Now enumerate the array - through the 'to' positions
    [items enumerateObjectsWithOptions:NSEnumerationReverse
                            usingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
        NSIndexPath *toIndex = (NSIndexPath *)obj;
        NSIndexPath *fromIndex = (NSIndexPath *)[fromEnumerator nextObject];

        // If the 'from' cell is after the insert then need to reset the springs
        if (fromIndex && fromIndex.item >= indexPath.item) {
            // Copy the current position of the 'from' cell onto the 'to'
            // cell, leaving the spring anchor at the 'to' rest position
            UICollectionViewLayoutAttributes *toItem =
                [_dynamicAnimator layoutAttributesForCellAtIndexPath:toIndex];
            UICollectionViewLayoutAttributes *fromItem =
                [_dynamicAnimator layoutAttributesForCellAtIndexPath:fromIndex];
            toItem.center = fromItem.center;
            [_dynamicAnimator updateItemUsingCurrentState:toItem];
        }
    }];
}
We have already explained the concept above, and the implementation is pretty simple to follow. We use 2 reverse enumerators, and copy the position of the cell from one to the other. Then, when the collection view updates the layout attributes of the cells, the springs will be set to pull them from their old position to their new one.
We just need to add a button and method to the view controller to manage the item additions. We add the button in the storyboard, and attach it to the following method:
- (IBAction)newViewButtonPressed:(id)sender {
    // What's the new number we're creating?
    NSNumber *newTile = @([_collectionViewCellContent count]);

    // Where should it be inserted? Just right of the center of the view
    NSIndexPath *rightOfCenter = [self indexPathOfItemRightOfCenter];

    // Update the model backing the collection view
    [_collectionViewCellContent insertObject:newTile atIndex:rightOfCenter.item];

    // Redraw
    [self.collectionView insertItemsAtIndexPaths:@[rightOfCenter]];
}
There's a utility method to work out the index which is to the right-hand side of the center of the currently visible items:

- (NSIndexPath *)indexPathOfItemRightOfCenter
{
    // Find all the currently visible items
    NSArray *visibleItems = [self.collectionView indexPathsForVisibleItems];

    // Find the center of the current viewport
    CGFloat midX = CGRectGetMidX(self.collectionView.bounds);
    NSUInteger indexOfItem = 0;
    CGFloat curMin = CGFLOAT_MAX;

    // Loop through the visible cells to find the left of center one
    for (NSIndexPath *indexPath in visibleItems) {
        UICollectionViewCell *cell = [self.collectionView
                                      cellForItemAtIndexPath:indexPath];
        if (ABS(CGRectGetMidX(cell.frame) - midX) < ABS(curMin)) {
            curMin = CGRectGetMidX(cell.frame) - midX;
            indexOfItem = indexPath.item;
        }
    }

    // If min is -ve then we have left of centre. If +ve then we have right of centre.
    if(curMin < 0) {
        indexOfItem += 1;
    }

    return [NSIndexPath indexPathForItem:indexOfItem inSection:0];
}
And with that we're done. Fire up the app and try adding cells - they drop in nicely and then bounce. Really cool. Try adding cells whilst the carousel is scrolling - this shows how awesome the dynamic animator really is!
Conclusion
In day 0 we showed how easy the UIKit Dynamics physics engine is to use, but with today's post we've really got to grips with a real-world example - using it to animate the cells in a collection view. This has some excellent applications, and despite its apparent complexity, it is actually pretty easy to get your head around. I encourage you to investigate adding subtle animations to your collection views, which will delight users, albeit subconsciously.
Day 6: TintColor
A fairly small and seemingly unobtrusive addition to UIView, the tintColor property is actually incredibly powerful. Today we'll look at how to use it, including tinting iOS standard controls, using tintColor in our own controls and even how to recolor images.
blue color will be used. Therefore, it's possible to completely change the appearance of an entire app by setting the tintColor on the view associated with the root view controller.
To demonstrate this, and to see how tintColor changes the appearance of some standard controls, take a look at the ColorChanger app.
The storyboard contains a selection of controls - including UIButton, UISlider and UIStepper. We've linked a change color button to the following method in the view controller:
- (IBAction)changeColorHandler:(id)sender {
// Generate a random color
CGFloat hue = ( arc4random() % 256 / 256.0 );
CGFloat saturation = ( arc4random() % 128 / 256.0 ) + 0.5;
CGFloat brightness = ( arc4random() % 128 / 256.0 ) + 0.5;
UIColor *color = [UIColor colorWithHue:hue
saturation:saturation
brightness:brightness
alpha:1];
self.view.tintColor = color;
}
The majority of this method is concerned with generating a random color - the final line is all that is needed to change the tint color, and hence the appearance of all the different controls.
One UI control which doesn't respond to tintColor changes as you might expect is UIProgressView. This is because it actually has 2 tint colors - one for the progress bar itself, and one for the background track. In order to get this to change color along with the other UI controls, we add the following method:
- (void)updateProgressViewTint
{
self.progressView.progressTintColor = self.view.tintColor;
}
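If the background track should follow the tint as well, UIProgressView also exposes trackTintColor. A possible variant of the method above - the 0.5 alpha here is an arbitrary choice, not from the sample project:

```objc
- (void)updateProgressViewTint
{
    self.progressView.progressTintColor = self.view.tintColor;
    // Use a semi-transparent version of the tint for the track
    self.progressView.trackTintColor =
        [self.view.tintColor colorWithAlphaComponent:0.5];
}
```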
Tint Dimming
In addition to being able to set a tint color, there is another property on UIView which allows you to dim the tint color - hence dimming an entire view hierarchy. This property is tintAdjustmentMode and can be set to one of three values: UIViewTintAdjustmentModeNormal, UIViewTintAdjustmentModeDimmed or UIViewTintAdjustmentModeAutomatic. To demonstrate the effects this has we've added a UISwitch and wired up its valueChanged event to the following method:
- (IBAction)dimTintHandler:(id)sender {
if(self.dimTintSwitch.isOn) {
self.view.tintAdjustmentMode = UIViewTintAdjustmentModeDimmed;
} else {
self.view.tintAdjustmentMode = UIViewTintAdjustmentModeNormal;
}
[self updateProgressViewTint];
}
When you flick the switch you'll see that all the regions which are usually the tint color now dim to a gray color. This is especially useful if you want to display a modal popup, and want to dim the background so as not to detract attention from the content you want the user to be concentrating on.
@implementation SCSampleCustomControl {
UIView *_tintColorBlock;
UILabel *_greyLabel;
UILabel *_tintColorLabel;
}
- (id)initWithCoder:(NSCoder *)aDecoder
{
self = [super initWithCoder:aDecoder];
if(self)
{
self.backgroundColor = [UIColor clearColor];
[self prepareSubviews];
}
return self;
}
- (void)prepareSubviews
{
_tintColorBlock = [[UIView alloc] init];
_tintColorBlock.backgroundColor = self.tintColor;
[self addSubview:_tintColorBlock];
    // The remainder of this listing was lost in extraction; the label
    // setup below is a reconstruction (label text values are illustrative)
    _greyLabel = [[UILabel alloc] init];
    _greyLabel.textColor = [UIColor grayColor];
    _greyLabel.text = @"Grey label";
    [self addSubview:_greyLabel];

    _tintColorLabel = [[UILabel alloc] init];
    _tintColorLabel.textColor = self.tintColor;
    _tintColorLabel.text = @"Tint color label";
    [self addSubview:_tintColorLabel];
}
@end
This first chunk of code creates the three aforementioned elements and sets their initial colors. Note that
since we're being created from a storyboard, we need to set the sizes of each of our components inside
layoutSubviews:
- (void)layoutSubviews
{
_tintColorBlock.frame = CGRectMake(0, 0, CGRectGetWidth(self.bounds) / 3,
CGRectGetHeight(self.bounds));
    // The grey-label layout was lost in extraction; the positions here are
    // a plausible reconstruction
    [_greyLabel sizeToFit];
    CGRect frame = _greyLabel.frame;
    frame.origin.x = CGRectGetWidth(self.bounds) / 3 + 10;
    frame.origin.y = CGRectGetHeight(self.bounds) / 4;
    _greyLabel.frame = frame;

    [_tintColorLabel sizeToFit];
frame = _tintColorLabel.frame;
frame.origin.x = CGRectGetWidth(self.bounds) / 3 + 10;
frame.origin.y = CGRectGetHeight(self.bounds) / 2;
_tintColorLabel.frame = frame;
}
So far we've done nothing new or clever - we've just built up a simple UIView subclass in code. The interesting
part comes now, when we override the new tintColorDidChange method:
- (void)tintColorDidChange
{
_tintColorLabel.textColor = self.tintColor;
_tintColorBlock.backgroundColor = self.tintColor;
}
All we're doing here is setting the colors of the views we want to respect the tintColor.
And that's it. The tint color changing code in the view controller doesn't need to change. Because of the way
that tintColor works with the UIView hierarchy, we don't have to touch anything else.
Template Images
iOS7 also allows tintColor to recolor images. When an image is rendered in template mode, only its alpha
channel is used: the tint color is applied everywhere apart from the pixels which
are set to transparent. This is ideal for adding image backgrounds to custom controls etc.
In this demo we'll show how to recolor the famous Shinobi ninja head logo.
We've added a UIImageView to our storyboard, and created an outlet called tintedImageView in the view
controller. Then in viewDidLoad we add the following code:
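This listing was lost in extraction; a minimal sketch of what it does (the asset name is an assumption):

UIImage *shinobiHead = [UIImage imageNamed:@"shinobi-head"]; // asset name assumed
shinobiHead = [shinobiHead imageWithRenderingMode:UIImageRenderingModeAlwaysTemplate];
self.tintedImageView.image = shinobiHead;
self.tintedImageView.contentMode = UIViewContentModeScaleAspectFit;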
We first load the image, and then call imageWithRenderingMode: to change the rendering mode to
UIImageRenderingModeAlwaysTemplate. The other options here are UIImageRenderingModeAlwaysOriginal and
UIImageRenderingModeAutomatic. The automatic version is the default, in which case the mode will change
according to the context of the image's use - e.g. tab bars, toolbars etc. automatically use their foreground
images as template images.
Once we've set the image mode to templated, we simply set it as the image for our image view, and set the
scaling mode to ensure the ninja's head doesn't get squashed.
Conclusion
On the surface tintColor seems a really simple addition to UIView; however, it actually represents some
incredibly powerful appearance customization functionality. If you're creating your own UIView subclasses
or custom controls, then I encourage you to implement tintColorDidChange - it'll make
your work much more in line with the standard UIKit components.
In fact, there isn't really any generic snapshot code which can cope with every possible scenario.
This has all changed with iOS7: new methods on UIView and UIScreen allow easy snapshotting
for a variety of use cases.
Rotating Views
- (void)generateRotations
{
for (CGFloat angle = 0; angle < 2 * M_PI; angle += M_PI / 20.0) {
UIView *newView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 250)];
newView.center = CGPointMake(CGRectGetMidX(self.bounds),
CGRectGetMidY(self.bounds));
newView.layer.borderColor = [UIColor grayColor].CGColor;
newView.layer.borderWidth = 1;
newView.backgroundColor = [UIColor colorWithWhite:0.8 alpha:0.4];
newView.transform = CGAffineTransformMakeRotation(angle);
newView.autoresizingMask = UIViewAutoresizingFlexibleHeight |
UIViewAutoresizingFlexibleWidth;
[self addSubview:newView];
}
}
In creating this view I'm not suggesting that it's the best way to create this effect, or indeed that it's useful,
but it does demonstrate a point.
In the view controller we'll create a couple of utility methods which we'll use repeatedly in this project. The
first creates one of these rotating views and adds it as a subview:
- (void)createComplexView
{
_complexView = [[SCRotatingViews alloc] initWithFrame:self.view.bounds];
[self.containerView addSubview:_complexView];
}
The second is a sample animation method, which animates a view supplied by reducing its size to (0,0):
- (void)animateViewAwayAndReset:(UIView *)view
{
[UIView animateWithDuration:2.0
animations:^{
view.bounds = CGRectZero;
}
completion:^(BOOL finished) {
[view removeFromSuperview];
[self performSelector:@selector(createComplexView)
withObject:nil
afterDelay:1];
}];
}
When the animation is complete it removes the supplied view, and then after a short delay resets the app by
recreating a new _complexView.
The following method is linked up to the toolbar button labelled Animate:
- (IBAction)handleAnimate:(id)sender {
[self animateViewAwayAndReset:_complexView];
}
The following picture demonstrates the problem that we have animating the rotating view weve created:
Animate
This problem definitely isn't insurmountable, but solving it would involve changing the way SCRotatingViews is
constructed.
The new snapshotting methods come to the rescue here, though. The following method is wired up to the
SShot toolbar button:
- (IBAction)handleSnapshot:(id)sender {
UIView *snapshotView = [_complexView snapshotViewAfterScreenUpdates:NO];
[self.containerView addSubview:snapshotView];
[_complexView removeFromSuperview];
[self animateViewAwayAndReset:snapshotView];
}
We call snapshotViewAfterScreenUpdates: to create a snapshot of our complex view. This returns a UIView
which represents the appearance of the view it has been called on. It's an incredibly efficient way of getting
a snapshot of the view - faster than the old approach of creating a bitmap representation.
Once we've got our snapshot view, we add it to the container view and remove the actual complex view. Then
we can animate the snapshot view:
Snapshot
- (void)recolorSubviews:(UIColor *)newColor
{
for (UIView *subview in self.subviews) {
subview.backgroundColor = newColor;
}
}
- (IBAction)handlePreUpdateSnapshot:(id)sender {
// Change the views
[_complexView recolorSubviews:[[UIColor redColor] colorWithAlphaComponent:0.3]];
// Take a snapshot. Don't wait for changes to be applied
UIView *snapshotView = [_complexView snapshotViewAfterScreenUpdates:NO];
[self.containerView addSubview:snapshotView];
[_complexView removeFromSuperview];
[self animateViewAwayAndReset:snapshotView];
}
- (IBAction)handlePostUpdateSnapshot:(id)sender {
// Change the views
[_complexView recolorSubviews:[[UIColor redColor] colorWithAlphaComponent:0.3]];
// Take a snapshot. This time, wait for the render changes to be applied
UIView *snapshotView = [_complexView snapshotViewAfterScreenUpdates:YES];
[self.containerView addSubview:snapshotView];
[_complexView removeFromSuperview];
[self animateViewAwayAndReset:snapshotView];
}
The methods are identical apart from the argument to snapshotViewAfterScreenUpdates:. First
we call the recolorSubviews: method, then perform the same snapshot procedure as in the previous
example. The following images show the difference in behavior of the two methods:
As expected, passing NO snapshots immediately, and therefore doesn't include the result of the recoloring
method call. Passing YES allows the render loop to complete the currently queued changes before snapshotting.
Snapshotting to an image
When animating, it's usually most useful to snapshot straight to a UIView; however, there
are times when it's helpful to have an actual image. For example, we might want to blur the current
view before animating it away. There is another snapshotting method on UIView for this exact purpose:
drawViewHierarchyInRect:afterScreenUpdates:. This allows you to draw the view into a Core Graphics
context, and hence get hold of a bitmap for the current view. It's worth noting that this method is
significantly less efficient than snapshotViewAfterScreenUpdates:, but if you need a bitmap representation
then it is the best way to go about it.
We wire the following method up to the Image toolbar button:
- (IBAction)handleImageSnapshot:(id)sender {
// Want to create an image context - the size of view and the scale of the screen
UIGraphicsBeginImageContextWithOptions(_complexView.bounds.size, NO, 0.0);
// Render our snapshot into the image context
[_complexView drawViewHierarchyInRect:_complexView.bounds afterScreenUpdates:NO];
    // The rest of this listing was lost in extraction; the following is a
    // reconstruction of what the prose below describes
    // Pull the snapshot out of the image context
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Blur the snapshot, display it in an image view, and animate it away
    // (blurImage: is the blurring helper described below)
    UIImageView *snapshotImageView =
        [[UIImageView alloc] initWithImage:[self blurImage:snapshotImage]];
    [self.containerView addSubview:snapshotImageView];
    [_complexView removeFromSuperview];
    [self animateViewAwayAndReset:snapshotImageView];
}
First we create a Core Graphics image context with the correct size and scale for _complexView, and then
call the drawViewHierarchyInRect:afterScreenUpdates: method - the second argument being the same as the
argument to the previous snapshotting method.
Then we pull the graphics context into a UIImage, which we display in a UIImageView, with the same pattern
of replacing the complex view and animating it out. To demonstrate a possible reason for needing a UIImage
rather than a UIView, we've created a method which blurs a UIImage:
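The listing here did not survive extraction; a sketch of such a method using a CoreImage Gaussian blur (the method name and blur radius are assumptions):

- (UIImage *)blurImage:(UIImage *)sourceImage
{
    // Push the UIImage into CoreImage
    CIImage *inputImage = [CIImage imageWithCGImage:sourceImage.CGImage];

    // Apply a Gaussian blur filter
    CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blurFilter setValue:inputImage forKey:kCIInputImageKey];
    [blurFilter setValue:@5 forKey:kCIInputRadiusKey]; // radius assumed

    // Render the result back into a UIImage
    CIImage *outputImage = [blurFilter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:outputImage
                                       fromRect:[inputImage extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}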
This is a simple application of a CoreImage filter: it applies a Gaussian blur and returns a new UIImage.
The following is a shot of the effect we've created:
Limitations
If you've ever tried to take a snapshot of an OpenGL-backed UIView you'll know that it is quite an involved
process (users of ShinobiCharts might be familiar with the pain). Excitingly, the new UIView snapshot methods
handle OpenGL seamlessly.
Because the snapshot methods create versions which respect the appearance of the views on-screen, they are
only able to snapshot views which are on-screen. This means it's not possible to use these methods to create
snapshots of views which you want to animate into view - an alternative approach must be used. It also means
that if your view is clipped by the edge of the screen, then your snapshot will be clipped, as shown here:
Conclusion
Taking snapshots of UIView elements in iOS has always been really useful, and with iOS7 we've finally got a
sensible API to let us take snapshots of views for most common purposes. That doesn't
mean that there aren't limitations - you'll still need alternative approaches for some scenarios - but
90% of use cases just got a whole lot easier!
Usage
Using the Safari reading list is remarkably easy - there are just three methods of interest. A reading list item
consists of a URL, a title and a description. The only acceptable URLs are HTTP or HTTPS
- you can check the validity of a URL using the supportsURL: class method:
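The original listing was lost in extraction; a sketch of the check (the URL is just an example):

NSURL *url = [NSURL URLWithString:@"https://www.shinobicontrols.com/blog"]; // example URL
if ([SSReadingList supportsURL:url]) {
    NSLog(@"This URL can be added to the reading list");
}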
Once you've checked that the URL you want to add is valid, adding it involves getting hold of the default
reading list and calling the add method:
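In outline (the title and preview text here are placeholders; the full version appears in the sample project later in this chapter):

SSReadingList *readingList = [SSReadingList defaultReadingList];
NSError *error = nil;
[readingList addReadingListItemWithURL:url
                                 title:@"Item title"        // placeholder metadata
                           previewText:@"Item description"
                                 error:&error];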
That's all there is to it! The pic below shows Safari's updated reading list:
Sample project
The sample project for this article pulls down the RSS feed from the ShinobiControls blog and displays the
entries in a table view. The detail page contains a toolbar button which allows the user to Read Later - i.e. add
the entry to their Safari reading list.
It's worth noting that the entirety of the code we're interested in for this article is in the method called when
the button is pressed:
- (IBAction)readLaterButtonPressed:(id)sender {
if([SSReadingList supportsURL:[self.detailItem url]]) {
SSReadingList *readingList = [SSReadingList defaultReadingList];
NSError *error;
[readingList addReadingListItemWithURL:[self.detailItem url]
title:[self.detailItem title]
previewText:[self.detailItem description]
error:&error];
if(error) {
NSLog(@"There was a problem adding to a reading list");
} else {
NSLog(@"Successfully added to reading list");
}
}
}
The point of the app isn't to demonstrate how to build an RSS parser, and as such the RSS feed is munged into
a JSON feed by Yahoo! Pipes.
Conclusion
A pretty short article today, revealing one of the less-noticed features of iOS7. It isn't groundbreaking, but
if your app has content which might be suitable for adding to the Safari reading list then it's definitely worth
the ten minutes it takes to add the functionality.
Vendor Identification
The closest replacement for uniqueIdentifier is another property on UIDevice - identifierForVendor - which
returns an NSUUID. This is shared between all apps from the same vendor on the same device. Different vendors
on the same device will see different identifierForVendor values, as will the same vendor across different
devices.
This value provides pretty much the same functionality from the point of view of the app developer, but
without the privacy concerns for the user.
It is worth noting that if a user uninstalls all apps from a given vendor, then the vendor ID will be destroyed.
When they install another app from that vendor, a new vendor ID will be generated.
Advertising Identification
If you need a unique ID for the purposes of implementing in-app advertising (irrespective of whether it
is iAd or not) then an alternative approach is required. The AdSupport module includes a class called
ASIdentifierManager which has an advertisingIdentifier property. This returns an NSUUID which may be
used for the purposes of advertising tracking. There is also a property, advertisingTrackingEnabled, which
returns a BOOL specifying whether or not the user has allowed advertising tracking. If the return value is NO then
there is a short list of things that the app is allowed to use the ID for - none of which involves tracking users.
The advertising ID is unique across an entire device, so that if tracking is enabled, ads can be tailored to the
specific user. More often than not an app developer won't have to interact with this class directly, but will instead
drop in an ad-serving framework which uses ASIdentifierManager behind the scenes.
Network Identification
When uniqueIdentifier was deprecated, using the device's MAC address became popular. A MAC address
is a unique identifier allocated to every piece of networking equipment in the world - from WiFi adaptors
to datacenter switches. It used to be possible to query an iOS device for its MAC address, which was both unique
and persistent - ideal for tracking. However, with iOS7, Apple have made it impossible to obtain the MAC
address programmatically: a constant value of 02:00:00:00:00:00 is returned instead. This
closes the loophole and will drive developers to the Apple-preferred device identification approaches.
Who Am I?
Conclusion
Apple are stamping out the alternatives for device identification, so now's the time to adopt their chosen
approach. This offers greater privacy for the end user, so it's a good thing to do.
The sample project accompanying this post (WhoAmI) gives a brief demo of the different approaches we've
outlined here.
- (id<UIViewControllerAnimatedTransitioning>)navigationController:
animationControllerForOperation:
fromViewController:
toViewController:
This method will be called every time the navigation controller transitions between view controllers
(whether through code or through a segue in a storyboard). We are told the view controllers we're transitioning
from and to, so at this point we can decide what kind of transition we need to return.
We create a class which will act as the nav controller delegate:
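The interface listing was lost in extraction; it is minimal - a sketch (the class name matches the implementation which follows):

@interface SCNavControllerDelegate : NSObject <UINavigationControllerDelegate>
@end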
@implementation SCNavControllerDelegate
- (id<UIViewControllerAnimatedTransitioning>)
navigationController:(UINavigationController *)navigationController
animationControllerForOperation:(UINavigationControllerOperation)operation
fromViewController:(UIViewController *)fromVC
toViewController:(UIViewController *)toVC
{
return [SCFadeTransition new];
}
@end
We want all of our transitions to be the same (whether forward or backward), so we can just return
an SCFadeTransition object for every transition. We'll look at what this object is and does in the next section.
Setting this delegate is simple - and the same as we see all over iOS:
- (id)initWithCoder:(NSCoder *)aDecoder
{
self = [super initWithCoder:aDecoder];
if(self) {
_navDelegate = [SCNavControllerDelegate new];
self.delegate = _navDelegate;
}
return self;
}
The SCFadeTransition class adopts the UIViewControllerAnimatedTransitioning protocol, which requires
two methods. The first specifies the duration of the transition:
- (NSTimeInterval)transitionDuration:
(id<UIViewControllerContextTransitioning>)transitionContext
{
return 2.0;
}
When the animateTransition: method is called we are provided with an object which conforms to the
UIViewControllerContextTransitioning protocol, which gives us access to all the bits and pieces we need
to complete the animation. The first method we'll use is viewControllerForKey:, which allows us to get hold
of the two view controllers involved in the transition:
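The listing here did not survive extraction; it amounts to two lookups with the standard context keys:

UIViewController *fromVC = [transitionContext
    viewControllerForKey:UITransitionContextFromViewControllerKey];
UIViewController *toVC = [transitionContext
    viewControllerForKey:UITransitionContextToViewControllerKey];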
The context also provides us with a UIView in which to perform the animations, and this is accessible through
the containerView method:
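The elided listing is a one-liner:

UIView *containerView = [transitionContext containerView];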
We need to make sure that the views associated with each of the view controllers are subviews of the container
view. It's likely that the view we're transitioning from is already a subview, but we ensure it:
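The elided listing can be sketched as:

[containerView addSubview:fromVC.view];
[containerView addSubview:toVC.view];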
We don't want to see the view we're transitioning to yet, so we set its alpha to 0:
toVC.view.alpha = 0.0;
Now we're in a position to perform the animation. Since we're doing a simple fade between the two view
controllers, we can use a UIView animation block:
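The animation listing was lost in extraction; a sketch which matches the points noted afterwards:

[UIView animateWithDuration:[self transitionDuration:transitionContext]
                 animations:^{
                     // Fade in the destination view
                     toVC.view.alpha = 1.0;
                 }
                 completion:^(BOOL finished) {
                     // Tidy up, and tell the OS the transition is done
                     [fromVC.view removeFromSuperview];
                     [transitionContext completeTransition:finished];
                 }];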
Points to note:
- We set the duration to be the same as the transitionDuration: method we've implemented.
- The view associated with the from view controller needs to be removed from the view hierarchy once the transition is completed.
- The completeTransition: method on the transition context needs to be called once we've finished the animation, so that the OS knows that we've finished.
Summary
With that we're done! It's actually quite simple once you get your head around the protocols. The only thing
we had to do to any of our existing view controller code was to set the delegate on the navigation
controller. The rest of the work was implemented in classes which supply a transition object and then perform
the animation itself.
As ever, the code is available on GitHub. Happy transitioning!
The animation methods on UIView have long allowed animation of animatable properties (such as transform,
backgroundColor, frame and center) by setting an end state, a duration and other options such as the animation
curve. However, setting intermediate states in the animation - so-called key-frames - has not been possible;
it was necessary to drop down to CoreAnimation itself and create a CAKeyframeAnimation. This
changes in iOS7: with the addition of two methods to UIView, key-frame animations are now supported without
dropping down to CoreAnimation.
To show how to use UIView key-frame animations we're going to create a couple of demos. The
first is an animation which changes the background color of a view through the colors of the rainbow, and
the second demonstrates a full 360-degree rotation of a view, specifying the rotation direction.
Rainbow Changer
UIView key-frame animations require the use of two methods, the first of which is similar to the other
block-based animation methods: animateKeyframesWithDuration:delay:options:animations:completion:. This
takes floats for the duration and delay, a bit-mask for the options, and blocks for animation and completion - all pretty
standard in the world of UIView animations. The difference comes in the method we call inside the animation
block: addKeyframeWithRelativeStartTime:relativeDuration:animations:. This method is used to add
the fixed points within the animation sequence.
The best way to understand this is with a demonstration. We are going to create an animation which animates
the background color of a UIView through the colors of the rainbow (before we start a flamewar about what
the colors of the rainbow are, I've made an arbitrary choice, which happens to be correct). We'll trigger this
animation on a bar button press, so we add a bar button in the storyboard and wire it up to the following
method:
- (IBAction)handleRainbow:(id)sender {
[self enableToolbarItems:NO];
void (^animationBlock)() = ^{
// Animations here
};
[UIView animateKeyframesWithDuration:4.0
delay:0.0
options:UIViewAnimationOptionCurveLinear |
UIViewKeyframeAnimationOptionCalculationModeLinear
animations:animationBlock
completion:^(BOOL finished) {
[self enableToolbarItems:YES];
}];
}
- (void)enableToolbarItems:(BOOL)enabled
{
for (UIBarButtonItem *item in self.toolbar.items) {
item.enabled = enabled;
}
}
We'll take a look at some of the options available when performing key-frame animations later - right now
let's fill out that animation block:
void (^animationBlock)() = ^{
    NSArray *rainbowColors = @[[UIColor orangeColor],
                               [UIColor yellowColor],
                               [UIColor greenColor],
                               [UIColor blueColor],
                               [UIColor purpleColor],
                               [UIColor redColor]];
    // The loop body was lost in extraction; reconstructed from the
    // key-frame excerpt shown below
    NSUInteger colorCount = [rainbowColors count];
    for (NSUInteger i = 0; i < colorCount; i++) {
        [UIView addKeyframeWithRelativeStartTime:i / (CGFloat)colorCount
                                relativeDuration:1 / (CGFloat)colorCount
                                      animations:^{
            self.rainbowSwatch.backgroundColor = rainbowColors[i];
        }];
    }
};
We start by creating an array of the colors we want to animate through, before looping through each of them.
For each color we call the method to add a key-frame to the animation:
[UIView addKeyframeWithRelativeStartTime:i/(CGFloat)colorCount
relativeDuration:1/(CGFloat)colorCount
animations:^{
self.rainbowSwatch.backgroundColor = rainbowColors[i];
}];
For each key-frame we specify a start time, a duration and an animation block. The times are relative - i.e.
we specify them as floats in the range (0,1), and they get scaled appropriately to match the animation
duration. Here we want the color changes to be evenly spaced throughout the animation, so we set the relative
start time of each key-frame to be the index of the current color over the total number of colors, and the
relative duration to be 1 over the total number of colors. The animation block specifies the end state of
the animation, in the same manner as all UIView block-based animations, so here we just need to set
the background color.
If you run the app now and press the Rainbow button then you'll see your first UIView key-frame
animation in action.
The options bit-mask can include one of the following calculation modes, which control how the animation
interpolates between key-frames:
UIViewKeyframeAnimationOptionCalculationModeLinear
UIViewKeyframeAnimationOptionCalculationModeDiscrete
UIViewKeyframeAnimationOptionCalculationModePaced
UIViewKeyframeAnimationOptionCalculationModeCubic
UIViewKeyframeAnimationOptionCalculationModeCubicPaced
The graph below shows how the different options control the animation. The horizontal axis represents the
time of the animation, whereas the vertical axis represents a one-dimensional parameter we are animating
(this could be for example the alpha of a view, or the width of a frame). We have specified 3 key-frames in
this example, each with different durations and end values.
I would suggest that, other than discrete, it's worth playing around with the different options in your specific
example. Since the algorithms are complete black boxes, and you have no control over their parameters, trying
to fully understand their operation is somewhat futile. An empirical approach to option selection will be more
fruitful in this case (this isn't usually true - it's generally better to understand what the different options actually
mean rather than guessing).
Rotation Directions
As a bonus example, we're also going to take a look at how to perform full rotations of views, specifying
the direction. When you specify an animation, CoreAnimation will animate the shortest route from the start
state to the end state. Therefore with rotation transforms we can only specify the start angle and the end
angle, but not the direction in which the view will rotate. With key-frame animations we can overcome this by
specifying some intermediate states.
For a full clockwise rotation we can write the following method:
- (IBAction)handleRotateCW:(id)sender {
[self enableToolbarItems:NO];
[UIView animateKeyframesWithDuration:2.0
delay:0.0
options:UIViewKeyframeAnimationOptionCalculationModeLinear
animations:^{
[UIView addKeyframeWithRelativeStartTime:0.0
relativeDuration:1/3.0
animations:^{
self.rotatingHead.transform = CGAffineTransformMakeRotation(2.0 * M_PI / 3.0);
}];
[UIView addKeyframeWithRelativeStartTime:1/3.0
relativeDuration:1/3.0
animations:^{
self.rotatingHead.transform = CGAffineTransformMakeRotation(4.0 * M_PI / 3.0);
}];
[UIView addKeyframeWithRelativeStartTime:2/3.0
relativeDuration:1/3.0
animations:^{
self.rotatingHead.transform = CGAffineTransformMakeRotation(0);
}];
    }
    completion:^(BOOL finished) {
        [self enableToolbarItems:YES];
    }];
}
We perform a key-frame animation with three states, equally spaced throughout the animation duration. We start
with a rotation angle of 0, then move to 2π/3 and 4π/3 before finishing back at 0. In order to completely
specify a rotation of 2π we need exactly two intermediate fixed points, since as soon as there is an
angle difference of greater than π the view will rotate in the opposite direction to the one you'd like. At an angle
difference of exactly π, the behavior is undefined.
In order to change the direction of rotation we can just reverse the key-frames, i.e. starting at an angle of 0 we
then move to 4π/3, followed by 2π/3, before finishing back at 0:
- (IBAction)handleRotateCCW:(id)sender {
[self enableToolbarItems:NO];
[UIView animateKeyframesWithDuration:2.0
delay:0.0
options:UIViewKeyframeAnimationOptionCalculationModeLinear
animations:^{
[UIView addKeyframeWithRelativeStartTime:0.0
relativeDuration:1/3.0
animations:^{
self.rotatingHead.transform = CGAffineTransformMakeRotation(4.0 * M_PI / 3.0);
}];
[UIView addKeyframeWithRelativeStartTime:1/3.0
relativeDuration:1/3.0
animations:^{
self.rotatingHead.transform = CGAffineTransformMakeRotation(2.0 * M_PI / 3.0);
}];
[UIView addKeyframeWithRelativeStartTime:2/3.0
relativeDuration:1/3.0
animations:^{
self.rotatingHead.transform = CGAffineTransformMakeRotation(0);
}];
    }
    completion:^(BOOL finished) {
        [self enableToolbarItems:YES];
    }];
}
The Shinobi ninja head can now rotate in either a clockwise or a counter-clockwise direction - without having
to drop down to CoreAnimation layers.
Conclusion
UIView animations have always been a high-level way to perform simple animations on views, and have
benefited from being exceptionally simple to understand and build. Now, with the addition of key-frame
animations, more complex animations can benefit from the same simple API. This post has demonstrated
how powerful it can be, with some trivial examples (although choosing a direction for rotation is a common
request).
Dynamic Type
Dynamic type is a concept which allows users to specify how large the typeface is in the apps on their device.
This isn't simply the ability to alter the font size, but also other properties of the type such as the kerning
and the line spacing. This ensures that the text is as readable as it can be at the different type sizes. In
order to do this you no longer specify particular fonts for your different text elements; instead you set what
they semantically represent, i.e. rather than specifying Helvetica 11pt, you set the type to be body text.
This is in line with the way something like HTML works - semantic markup of your text, allowing
the user to control the appearance. As such, rather than specifying fonts per se, there is a new class method
on UIFont which will pull out the correct font:
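A sketch of its use (the outlet name is an assumption); the text style constant is one of the following:

self.bodyLabel.font = [UIFont preferredFontForTextStyle:UIFontTextStyleBody]; // outlet name assumed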
UIFontTextStyleHeadline
UIFontTextStyleBody
UIFontTextStyleSubheadline
UIFontTextStyleFootnote
UIFontTextStyleCaption1
UIFontTextStyleCaption2
As well as being able to specify the font via code, you can set it using interface builder:
When combined with autolayout, using dynamic type means that a user can control the appearance of the
text inside your app. There is a Text Size options screen within the settings screens which allows changing
of the type size:
Changing Size
There are a total of 7 different font sizes - the following shots demonstrate some of them:
In future OS updates the specific font might change as the appearance of the operating system develops, but
by adopting dynamic type you can be assured that your app will both be accessible and match the OS style
with no further work down the line.
Font Descriptors
Another addition which TextKit brings is the concept of font descriptors. These are much more in line with
the way we're used to thinking about fonts - we can modify a font, as opposed to having to completely
specify a new one. For example, say we have some text we'd like to set in the same font as our body text, but bold.
Previously in iOS we would have had to know the font being used for the body text, find its bold equivalent,
and then construct a new font object using fontWithName:size: with the string
name of that bold variant.
This isn't very intuitive, and with the introduction of dynamic type it's not always possible to know exactly
which font you're using. Font descriptors make this a lot easier: since a descriptor is a collection of attributes about a
font, it's possible to change attributes and hence change the font. For example, to get a bold
version of the body text font:
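The listing was lost in extraction; a sketch of the steps the following paragraph describes (the outlet name is an assumption):

UIFontDescriptor *bodyDescriptor =
    [UIFontDescriptor preferredFontDescriptorWithTextStyle:UIFontTextStyleBody];
UIFontDescriptor *boldDescriptor =
    [bodyDescriptor fontDescriptorWithSymbolicTraits:UIFontDescriptorTraitBold];
self.boldLabel.font = [UIFont fontWithDescriptor:boldDescriptor size:0.0]; // outlet assumed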
First we get the descriptor for the body text style, and then using the fontDescriptorWithSymbolicTraits:
method we can override a so-called font trait. Then the UIFont method fontWithDescriptor:size: can be
used to actually get the required font - noting that setting the size: parameter to 0.0 will result in returning
the font sized as determined in the font descriptor.
This is an example of modifying a UIFontDescriptor using a font trait, other examples of which are as
follows:
UIFontDescriptorTraitItalic
UIFontDescriptorTraitExpanded
UIFontDescriptorTraitCondensed
It's also possible to specify other features of the font appearance (such as the type of serifs) using attributes.
Have a read of the documentation for UIFontDescriptorSymbolicTraits for more information.
As well as modifying an existing font descriptor, you can create a dictionary of attributes and then find a font
descriptor which matches your request. For example:
UIFontDescriptor *scriptFontDescriptor =
[UIFontDescriptor fontDescriptorWithFontAttributes:
@{UIFontDescriptorFamilyAttribute: @"Zapfino",
UIFontDescriptorSizeAttribute: @15.0}
];
self.scriptTextLabel.font = [UIFont fontWithDescriptor:scriptFontDescriptor size:0.0];
Here we're specifying a font with a given family and size in a dictionary of attributes. Other attributes which
can be used include:
UIFontDescriptorNameAttribute
UIFontDescriptorTextStyleAttribute
UIFontDescriptorVisibleNameAttribute
UIFontDescriptorMatrixAttribute
This list is not exhaustive - UIFontDescriptor is incredibly powerful and brings iOS in line with many other
text rendering engines used elsewhere.
Font Descriptor
Conclusion
Dynamic type is an incredibly useful tool to improve both the appearance and accessibility of your app. When
combined with autolayout it allows user content to be beautiful and easily readable. Font descriptors offer a
much easier way to work with fonts - much closer to the concept we hold in our heads from years of using
word-processing software - and should make working with fonts a lot less painful. We've only seen the tip of
the iceberg here: type rendering is a complex topic, and with these new concepts iOS is providing
much easier access to the underlying engine.
Requesting Directions
There are quite a lot of different classes which we need in MapKit, but it's pretty simple to work through
them in turn. In order to query Apple's servers for a set of directions, we need to encapsulate the details
in an MKDirectionsRequest object. This class has existed since iOS6 for use by apps which were capable of
generating their own turn-by-turn directions, but it has been expanded in iOS7 to allow developers to request
directions from Apple themselves.
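Creating the request itself is a one-liner; a minimal sketch (the variable name is an assumption) might look like:

```objectivec
// Create the directions request object (variable name is an assumption)
MKDirectionsRequest *directionsRequest = [[MKDirectionsRequest alloc] init];
```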
In order to make a request we need to set the source and the destination, both of which are MKMapItem objects.
These are objects which represent a location on a map, including its position and other metadata such as
name, phone number and URL. There are a couple of options for creating these - one of which is to use the
user's current location:
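The current-location variant is a single factory call; a minimal sketch:

```objectivec
// A map item representing the device's current location
MKMapItem *currentLocationItem = [MKMapItem mapItemForCurrentLocation];
```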
When the user fires up the app for the first time they will then be asked for permission to use their current
location:
[Image: Allow location]
You can also create a map item for a specific location using the initWithPlacemark: method, which brings
us on to another MapKit class. MKPlacemark represents the actual location on a map - i.e. its latitude and
longitude. We could use a reverse geocoder from CoreLocation to generate a placemark, but since that's not
the point of this post, we're going to create a placemark for some fixed coordinates. Putting all this together
we can complete setting up our MKDirectionsRequest object.
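A sketch of the completed setup, assuming an MKDirectionsRequest variable named directionsRequest (the coordinates shown, for the White House, are illustrative):

```objectivec
// Build a placemark for some fixed coordinates (values are illustrative)
CLLocationCoordinate2D coords = CLLocationCoordinate2DMake(38.8977, -77.0365);
MKPlacemark *placemark = [[MKPlacemark alloc] initWithCoordinate:coords
                                               addressDictionary:nil];
MKMapItem *destinationItem = [[MKMapItem alloc] initWithPlacemark:placemark];

// Route from the user's current location to the fixed placemark
directionsRequest.source = [MKMapItem mapItemForCurrentLocation];
directionsRequest.destination = destinationItem;
```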
There are some other optional properties on an MKDirectionsRequest which can be used to control the route
we're going to be sent back:
departureDate and arrivalDate. Setting these values will enable the returned routes to be optimized
for the time of day for travel - e.g. allowing for standard traffic conditions.
transportType. Currently Apple can provide either walking or driving directions using the enum values
MKDirectionsTransportTypeAutomobile or MKDirectionsTransportTypeWalking. The default value is
MKDirectionsTransportTypeAny.
requestsAlternateRoutes. If the routing server can find more than one reasonable route then setting
this property to YES will enable this. Otherwise it will just return one route.
Now that we've got a valid directions request we can send it off to get a route. This is done using the
MKDirections class - which has a constructor which takes an MKDirectionsRequest object:
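A minimal sketch, assuming the directionsRequest variable name used above:

```objectivec
// Wrap the request in an MKDirections object
MKDirections *directions =
    [[MKDirections alloc] initWithRequest:directionsRequest];
```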
There are 2 methods which can be used: calculateETAWithCompletionHandler: estimates the time a route
will take, whereas calculateDirectionsWithCompletionHandler: calculates the actual route. Both of these
methods are asynchronous, and take completion handling blocks. MKDirections objects also have a cancel
method, which does as suggested for any currently running requests, and a calculating property which is
true when there is currently a request in progress. A single MKDirections object can only run a single request
at once - additional requests will fail. If you want to run multiple simultaneous requests then you can have
more than one MKDirections object, but be aware that asking for too many might well result in receiving
throttling errors from Apple's servers.
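A sketch of kicking off the calculation, assuming a directions variable holding the MKDirections object (the error handling shown is an assumption):

```objectivec
[directions calculateDirectionsWithCompletionHandler:
    ^(MKDirectionsResponse *response, NSError *error) {
        if (error) {
            NSLog(@"There was an error: %@", error);
            return;
        }
        // Plot the first returned route on the map
        [self plotRouteOnMap:response.routes.firstObject];
}];
```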
Directions Response
The response from Apple's server is returned to us as an MKDirectionsResponse object, which as well as the
source and destination, contains an array of MKRoute objects. Note that this array will contain just one object
unless we set requestsAlternateRoutes to YES on the request.
MKRoute objects, as their name suggests, represent a route between two points which a user can follow.
We have created a utility method to plot a route on the map, which we'll take a look at in the next section.
Rendering a Polyline
We've been sent a polyline of the route, which we want to plot on the map. iOS7 changes the way in which
we plot overlays on the map, with the introduction of the MKOverlayRenderer class. If we want to do custom
shapes, or a non-standard rendering technique, then we can subclass it to create our own renderer; however,
there are a set of overlay renderers for standard use cases. We want to render a polyline, so we can use the
MKPolylineRenderer. We'll look in a second at when and where to create our renderer, but let's take a look
at the plotRouteOnMap: method we referred to in the previous section.
An MKPolyline is an object which represents a line made from multiple segments, and adopts the MKOverlay
protocol. This means that we can add it as an overlay to an MKMapView object, using the addOverlay: method:
- (void)plotRouteOnMap:(MKRoute *)route
{
    if(_routeOverlay) {
        [self.mapView removeOverlay:_routeOverlay];
    }
    // Keep a reference to the polyline and add it to the map
    _routeOverlay = route.polyline;
    [self.mapView addOverlay:_routeOverlay];
}
This method takes an MKRoute object and adds the polyline of the route as an overlay to the MKMapView
referenced by the mapView property. We have an ivar _routeOverlay which we use to keep a reference to
the polyline. This means that when the method is called we can remove an existing route, and replace it with
the new one instead.
Although we've now added the overlay to the map view, it won't yet be drawn. This is because the map
doesn't know how to draw this overlay object - and this is where the new MKOverlayRenderer class comes in.
When an overlay is present on a map view, the map view will ask its delegate for a renderer to draw it. Then,
as the user zooms and pans around the map, the renderer will be asked to draw the overlay in the different
map states.
We need to adopt the MKMapViewDelegate protocol, and implement the following method to provide the map
view with a renderer for our polyline:
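A sketch of this delegate method (the stroke color and line width values are assumptions):

```objectivec
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView
            rendererForOverlay:(id<MKOverlay>)overlay
{
    // Create a renderer which knows how to draw polyline overlays
    MKPolylineRenderer *renderer =
        [[MKPolylineRenderer alloc] initWithPolyline:(MKPolyline *)overlay];
    renderer.strokeColor = [UIColor redColor];
    renderer.lineWidth = 4.0;
    return renderer;
}
```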
We've got a somewhat simplified situation here where we know that there will only be one overlay, and it will
be of type MKPolyline, and therefore we don't require any code to decide what renderer to return. We create
an MKPolylineRenderer, which is a subclass of MKOverlayRenderer whose purpose is to draw polyline overlays.
We set some simple properties (strokeColor and lineWidth) so that we can see the overlay, and then return
the new object.
All that remains is setting the delegate property on the map view so that this delegate method is called when
the overlay is added to the map:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
self.mapView.delegate = self;
}
[Image: Polyline Overlay]
Route steps
As well as the polyline representing the route, we're also provided with an array of MKRouteStep objects, which form the turn-by-turn directions a user should follow to travel along the route. MKRouteStep objects
have the following properties:
polyline: Much the same as the route has a polyline, each step has a line which can be used to show
this section of the route on a map.
instructions: A string which gives the details of what the user should do to follow this section of the
route.
notice: Any useful information regarding this section of the route.
distance: Measured in metres.
transportType: It's not unreasonable that routes comprise multiple modes of transport, so each step
should have its own transport type.
In the RouteMaster app accompanying today's post we populate a table view with the list of steps, and then
show a new map view with the map for the section when requested.
Building RouteMaster
We've now discussed the process we used to request directions and the response we get, but not given many
details about the app which accompanies today's post. Even though it doesn't really demonstrate any further
details of MapKit, it's worth having a quick look at how the app is constructed.
This app isn't especially useful since it only determines the route from your current location to the White
House in Washington DC. The app is built using a storyboard, and is based around a navigation controller.
The following are the view controllers which make up the app:
SCViewController. Main screen. Allows the user to kick off the routing request and when a response is
received plots the entire route on the embedded map view. It contains a button (which appears when a
route has been received) to view the route details. This pushes the next view controller onto the stack.
SCStepsViewController. This is a UITableViewController, which displays a cell for each of the steps
in the route. Selecting one of these cells will push the final view controller onto the stack:
SCIndividualStepViewController. This displays the details of a specific step, including a map, the
distance, and the instructions provided by the routing server.
Since we're using storyboards, we override the prepareForSegue:sender: method in each of our view
controllers to provide the next view controller with the data it needs to display. For example, we set the
route property (of type MKRoute) of the SCStepsViewController as we segue from the main view
controller:
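A minimal sketch of that override (the segue identifier check is omitted, and the _route ivar name is an assumption):

```objectivec
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if ([segue.destinationViewController
            isKindOfClass:[SCStepsViewController class]]) {
        SCStepsViewController *stepsVC = segue.destinationViewController;
        // Hand the route we received from the directions request onwards
        stepsVC.route = _route;
    }
}
```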
Similarly, the SCIndividualStepViewController has a routeStep property (of type MKRouteStep), which we
set as we transition from the table of steps:
Since the individual step view controller contains an MKMapView we add the polyline as an overlay in exactly
the same way we did for the main view controller.
The rest of the app is pretty self-explanatory, and if you run it up you should be provided with the best route
from your current location (or simulated equivalent in the simulator) to the White House. You can change the
simulated location in the Debug menu in Xcode, although it only seems to be possible to get routing results
for a start location within the continental US (seems reasonable - driving across the Atlantic isn't that easy).
[Image: Simulate Location]
Maybe not the most useful app, but with a sprinkling of CoreLocation, you could make your own directions
app without too much difficulty.
Conclusion
MapKit is starting to mature a little in iOS7 with the addition of some really useful APIs. The directions API
is fairly easy to use, despite the plethora of different classes, and returns results which are really easy to work
with in an app. All we need now is for the constant improvement in Apple's mapping back-end to continue so
that the results we provide to users are sensible.
We define a dismissal property which is going to determine which direction the card flip will go in.
As before we need to implement 2 methods:
- (void)animateTransition:(id<UIViewControllerContextTransitioning>)transitionContext
{
// Get the respective view controllers
UIViewController *fromVC = [transitionContext
viewControllerForKey:UITransitionContextFromViewControllerKey];
UIViewController *toVC = [transitionContext
viewControllerForKey:UITransitionContextToViewControllerKey];
// Get the views
UIView *containerView = [transitionContext containerView];
UIView *fromView = fromVC.view;
UIView *toView = toVC.view;
- (NSTimeInterval)transitionDuration:
(id<UIViewControllerContextTransitioning>)transitionContext
{
return 1.0;
}
The animation method looks quite complicated, but in reality it just uses the new UIView keyframe animations
we looked at on day 11. The important part to note is that the dismissal property is used to determine in
which direction the rotation will be performed. Other than that, the animation is pretty straightforward, and
we won't go into detail here. For more information check out custom view controller transitions on day 10
and UIView key-frame animations on day 11.
Now that we have an animation object we have to wire it into our view controller transitions. We have created
a storyboard which contains 2 view controllers. The first contains a button which triggers a segue to present
the modal view controller, and the second contains a button which dismisses the modal view controller via
the following method:
- (IBAction)handleDismissPressed:(id)sender {
[self dismissViewControllerAnimated:YES completion:NULL];
}
If we run up the app now then we can see the standard transition animation to present and dismiss a modal
view controller. There is a standard flip transition which we could use, but we're interested in using custom
animations, so let's add our custom transition animation.
@implementation SCViewController
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
_flipAnimation = [SCFlipAnimation new];
}
- (id<UIViewControllerAnimatedTransitioning>)
animationControllerForPresentedController:(UIViewController *)presented
presentingController:(UIViewController *)presenting
sourceController:(UIViewController *)source
{
_flipAnimation.dismissal = NO;
return _flipAnimation;
}
- (id<UIViewControllerAnimatedTransitioning>)
animationControllerForDismissedController:(UIViewController *)dismissed
{
_flipAnimation.dismissal = YES;
return _flipAnimation;
}
It's important to note that the difference between the present and dismiss methods is the setting of the
dismissal property on the animation - which determines which direction the flip will take. All that is left
to do is to set this as the transitioning delegate on the appropriate view controller. Since we're talking about
presenting and dismissing a view controller, these methods both refer to the modal view controller, and so
the delegate must be set on this controller. Since the modal view controller is being created by the storyboard
segue process, we can set this in the prepareForSegue:sender: method:
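A sketch of that method, assuming the modal controller is the segue's destination:

```objectivec
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    // The modal view controller needs to use our custom transitioning delegate
    UIViewController *destinationVC = segue.destinationViewController;
    destinationVC.transitioningDelegate = self;
}
```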
If you run the app up now, then you should see that we've replaced the original slide animation with our
custom vertical card-flip animation.
Interactive transitioning
There are 2 more methods on the UIViewControllerTransitioningDelegate protocol, both of which return
an object which implements the UIViewControllerInteractiveTransitioning protocol; these are provided
to support interactive transitioning. We could go ahead and create an object which implements this ourselves,
but Apple has provided a concrete class in the form of UIPercentDrivenInteractiveTransition which covers
the majority of use cases.
The concept of an interactor (i.e. an object which conforms to UIViewControllerInteractiveTransitioning)
is that it controls the progress of an animation (which is provided by an object conforming to the
UIViewControllerAnimatedTransitioning protocol). The UIPercentDrivenInteractiveTransition class
provides methods to enable specifying the current progress of the animation as a percentage, as well as
cancelling and completing the animation.
This will all become a lot clearer once we see how it all fits with our project. We want to create a pan
gesture which, as the user drags vertically, will control the transition of presenting/dismissing the modal
view controller. We'll create a subclass of UIPercentDrivenInteractiveTransition which has the following
properties:
@interface SCFlipAnimationInteractor : UIPercentDrivenInteractiveTransition

@property (nonatomic, strong, readonly) UIPanGestureRecognizer *gestureRecogniser;
@property (nonatomic, assign, readonly) BOOL interactionInProgress;
@property (nonatomic, weak)
    UIViewController<SCInteractiveTransitionViewControllerDelegate> *presentingVC;

@end
The gesture recognizer is as we've already discussed; we also provide a property for determining whether or
not an interaction is in progress, and finally a property which specifies the presenting view controller. We'll
see why we need this later on, but for now note that it adopts the following simple protocol:
@protocol SCInteractiveTransitionViewControllerDelegate <NSObject>
- (void)proceedToNextViewController;
@end
@interface SCFlipAnimationInteractor ()
@property (nonatomic, strong, readwrite) UIPanGestureRecognizer *gestureRecogniser;
@property (nonatomic, assign, readwrite) BOOL interactionInProgress;
@end
@implementation SCFlipAnimationInteractor
- (instancetype)init
{
self = [super init];
if (self) {
self.gestureRecogniser = [[UIPanGestureRecognizer alloc]
initWithTarget:self action:@selector(handlePan:)];
}
return self;
}
@end
Firstly we need to redefine 2 of the properties as internally read-write, and at construction time we create the
gesture recognizer and set its target to an internal method. Notice that we don't attach it to any views at this
stage - we have provided it as a property so that we can do this externally.
The pan handling method is as follows:
- (void)handlePan:(UIPanGestureRecognizer *)pgr
{
    CGPoint translation = [pgr translationInView:pgr.view];
    CGFloat percentage = fabs(translation.y / CGRectGetHeight(pgr.view.bounds));
    switch (pgr.state) {
        case UIGestureRecognizerStateBegan:
            self.interactionInProgress = YES;
            [self.presentingVC proceedToNextViewController];
            break;

        case UIGestureRecognizerStateChanged:
            [self updateInteractiveTransition:percentage];
            break;

        case UIGestureRecognizerStateEnded:
            if(percentage < 0.5) {
                [self cancelInteractiveTransition];
            } else {
                [self finishInteractiveTransition];
            }
            self.interactionInProgress = NO;
            break;

        case UIGestureRecognizerStateCancelled:
            [self cancelInteractiveTransition];
            self.interactionInProgress = NO;
            break;

        default:
            break;
    }
}
This is a fairly standard gesture recognizer handling method, with cases for the different recognizer states.
Before we start the switch we calculate the percentage complete - i.e. given how far the gesture has travelled,
how complete we consider the transition to be. Then the switch behaves as follows:
Began: Here we record that an interaction is in progress, and use the method we added to our
presenting view controller to begin the transition. This is important - we're using the gesture to begin
the transition. The interactor isn't currently being used other than for handling the gesture, because
there is no transition occurring. Once we've called this method on the view controller (provided we
implement it correctly) a transition will begin and the interactor will begin performing its animation
control job.
Changed: We must now be in the middle of an interactive transition (since we started one when the
gesture began) and therefore we just call the method provided by our superclass,
updateInteractiveTransition:, to specify how complete our transition is. This will set the current
transition appearance to be as if the animation is the specified proportion complete.
Ended: When a gesture ends we need to decide whether or not we should finish the transition
or cancel it. We call the helper methods provided by the superclass to cancel the transition
(cancelInteractiveTransition) if the percentage is lower than 0.5, and to complete the transition
(finishInteractiveTransition) otherwise. We also need to update our in-progress property since the
transition is finished.
Cancelled: If cancelled then we should cancel the transition and update the interactionInProgress
property.
That completes all the code that we need in the interactor - all that remains is to wire it all up.
Firstly let's add the new methods for interactive transitions to the UIViewControllerTransitioningDelegate,
which is our primary view controller:
- (id<UIViewControllerInteractiveTransitioning>)interactionControllerForPresentation:
(id<UIViewControllerAnimatedTransitioning>)animator
{
return _animationInteractor.interactionInProgress ? _animationInteractor : nil;
}
- (id<UIViewControllerInteractiveTransitioning>)interactionControllerForDismissal:
(id<UIViewControllerAnimatedTransitioning>)animator
{
return _animationInteractor.interactionInProgress ? _animationInteractor : nil;
}
These are both identical (for presentation and dismissal). We only want to return an interactor if we're
performing an interactive transition - i.e. if a user clicked on the button rather than panning then we
should perform a non-interactive transition. This is the purpose of the interactionInProgress property on
our interactor. We're returning an ivar _animationInteractor here, which we set up in viewDidLoad:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
_animationInteractor = [SCFlipAnimationInteractor new];
_flipAnimation = [SCFlipAnimation new];
}
When we created the gesture recognizer in the interactor, we didn't actually add it to a view, so we can do
that now, in our view controller's viewDidAppear: method:
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    // Add the gesture recogniser to the window the first time the view appears
    if (![self.view.window.gestureRecognizers
            containsObject:_animationInteractor.gestureRecogniser]) {
        [self.view.window addGestureRecognizer:_animationInteractor.gestureRecogniser];
    }
}
We normally add gesture recognizers to views, but here we're adding it to the window object instead. This
is because as the animation occurs, the view controller's view will move, and hence the gesture recognizer
won't behave as expected. Adding it to the window instead will ensure the behavior we expect. If we were
performing a navigation controller transition instead we could add the gesture to the navigation controller's
view. The gesture recognizer is added in viewDidAppear: since at this point the window property is set correctly.
The final piece of the puzzle is to set the presentingVC property on the interactor. In order to do this we need
to make our view controllers implement the SCInteractiveTransitionViewControllerDelegate protocol.
On our main view controller this is pretty simple:
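A sketch of the main view controller's implementation (the segue identifier is an assumption):

```objectivec
- (void)proceedToNextViewController
{
    // Kick off the segue which presents the modal view controller
    [self performSegueWithIdentifier:@"displayModal" sender:self];
}
```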
And now we have implemented the required method, we can set the correct property on the interactor in
viewDidAppear:. This will ensure that it is set correctly every time the primary view controller is displayed,
whether it be on the first display or when the modal view controller is dismissed:
- (void)viewDidAppear:(BOOL)animated
{
    ...
    // Set the recipient of the interactor
    _animationInteractor.presentingVC = self;
}
So, when the user starts the pan gesture, the interactor will call proceedToNextViewController on the primary
view controller, which will kick off the segue to present the modal view controller - this is exactly what we
want!
To perform the same operation on the modal view controller it must have a reference to the interactor as well
(so that it can update the presentingVC property):
...
@property (nonatomic, strong) SCFlipAnimationInteractor *interactor;
...
@end
We set this property in the prepareForSegue: method on the main view controller:
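A sketch of that method (the modal controller's class name is an assumption):

```objectivec
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    SCModalViewController *modalVC = segue.destinationViewController;
    modalVC.transitioningDelegate = self;
    // Pass the interactor across so the modal VC can redirect it to itself
    modalVC.interactor = _animationInteractor;
}
```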
- (void)proceedToNextViewController
{
[self dismissViewControllerAnimated:YES completion:NULL];
}
And finally, once the modal view controller has appeared, we need to update the property on the interactor
to make sure that the next time an interactive transition is started (i.e. the user begins a vertical pan) it calls
the method on the modal VC, not the main one:
- (void)viewDidAppear:(BOOL)animated
{
    // Reset which view controller should be the recipient of the
    // interactor's transition
    self.interactor.presentingVC = self;
}
And that's it. If you run the app up now and drag vertically you'll see that the transition to show the modal
view controller will follow your finger. If you drag further than half way and let go then the transition will
complete, otherwise it will return to its original state.
Conclusion
Interactive view controller transitions can appear to be quite a complicated topic - primarily due to the
vast array of different protocols that you need to implement, and also because it's not immediately obvious
which pieces of the puzzle should be responsible for what (e.g. who should own the gesture recogniser?).
However, in reality, we've got some really quite powerful functionality for a small amount of code. I encourage
you to give these custom view controller transitions a try, but be aware: with great power comes great
responsibility - just because we can now do lots of wacky transitions between view controllers, we should
ensure that we don't overcomplicate the UX for our app users.
Using filters is really simple - they can even be chained together, but for our purposes we just want to specify
a single filter:
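A minimal sketch of applying one of the filters (the source image variable and the chosen filter name are illustrative):

```objectivec
// Wrap a source image and apply one of the new photo-effect filters
CIImage *inputImage = [[CIImage alloc] initWithImage:sourceUIImage];
CIFilter *filter = [CIFilter filterWithName:@"CIPhotoEffectTransfer"];
[filter setValue:inputImage forKey:kCIInputImageKey];
```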
A CoreImage filter is represented by the CIFilter class, which has a factory method to create a specific
filter object. These filter objects then use KVC to specify the relevant filter arguments. All of the new
photo-effect filters take just a single argument - the input image, which is specified using the string constant
kCIInputImageKey.
We can then turn this back into a UIImage for display in a UIImageView:
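For example, assuming the CIFilter is held in a variable named filter:

```objectivec
// Create a UIImage from the filter's lazily-evaluated output
UIImage *filteredImage = [UIImage imageWithCIImage:filter.outputImage];
```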
The new photo-effect filters are referenced with the following strings:
@"CIPhotoEffectChrome"
@"CIPhotoEffectFade"
@"CIPhotoEffectInstant"
@"CIPhotoEffectMono"
@"CIPhotoEffectNoir"
@"CIPhotoEffectProcess"
@"CIPhotoEffectTonal"
@"CIPhotoEffectTransfer"
In the app which accompanies today's post we have a collection view which demonstrates the output of each
of the new filters on a single input image. Since we don't have loads of images, we process the images up-front,
to preserve the scrolling performance we expect from iOS.
This also requires that we construct CGImage versions of each of the CIImage filter outputs. This is because
the outputImage property is generated lazily. To do this, we use a CIContext to draw the CIImage into a
CoreGraphics context:
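A sketch of that conversion, assuming the CIFilter is held in a variable named filter:

```objectivec
// Render the filter's output into a CGImage, then wrap it as a UIImage
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *outputImage = filter.outputImage;
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:outputImage.extent];
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
```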
[images addObject:image];
The rest of the code in the SCPhotoFiltersViewController is the boilerplate code required to run a collection
view with custom cells. If you run up the app you can see the different filtered results:
QR Code Generation
In addition to the photo-effect filters, iOS7 also introduces a filter which is capable of generating QR
codes to represent a specific data object. In the sample app the second tab (SCQRGeneratorViewController)
demonstrates this functionality - when the Generate button is pressed, the content of the text field is
encoded in a QR code, which is displayed above it.
The method which creates the QR code is really rather simple:
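A sketch matching the description below (the helper's name and the chosen correction level are assumptions):

```objectivec
- (CIImage *)createQRForString:(NSString *)qrString
{
    // The QR filter encodes an NSData object, so convert the string using UTF-8
    NSData *stringData = [qrString dataUsingEncoding:NSUTF8StringEncoding];

    CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    [qrFilter setValue:stringData forKey:@"inputMessage"];
    [qrFilter setValue:@"M" forKey:@"inputCorrectionLevel"];
    return qrFilter.outputImage;
}
```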
The QR filter requires an NSData object which it will encode, and hence we first take the NSString and encode
it into an NSData object using UTF-8 encoding.
Then, the same as we did before, we create a CIFilter using the filterWithName: factory method, specifying
the name to be CIQRCodeGenerator. The two keys we need to set in this case are called inputMessage, which
is the NSData object we just created, and inputCorrectionLevel, which specifies how resilient to error the
code will be. There are 4 levels:
L 7% error resilience
M 15% error resilience
Q 25% error resilience
H 30% error resilience
Once we've done this we can return the outputImage of the filter, which will be a CIImage with 1pt resolution
for the smallest squares.
We want to be able to resize this image, but we don't want to allow any interpolation, since what we have is
pixel-perfect. In order to do this we create a new method which enables rescaling an image with interpolation
disabled:
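A sketch of such a method (the method name appears later in the chapter; the exact drawing setup is an assumption, but the key line is the interpolation-quality call discussed in the text):

```objectivec
- (UIImage *)createNonInterpolatedUIImageFromCIImage:(CIImage *)image
                                           withScale:(CGFloat)scale
{
    // Render the CIImage into a CGImage first
    CGImageRef cgImage = [[CIContext contextWithOptions:nil]
                              createCGImage:image fromRect:image.extent];

    // Create a bitmap context at the rescaled size
    UIGraphicsBeginImageContext(CGSizeMake(image.extent.size.width * scale,
                                           image.extent.size.height * scale));
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Disable interpolation so the QR squares stay crisp when scaled up
    CGContextSetInterpolationQuality(context, kCGInterpolationNone);
    CGContextDrawImage(context, CGContextGetClipBoundingBox(context), cgImage);

    // Grab the result
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(cgImage);
    return scaledImage;
}
```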
Like we did in the previous example, we first create a CGImage representation of the CIImage. Then we
create a CoreGraphics context at the correctly rescaled resolution. The important line here is the one where
we set the interpolation quality to none. If we were rescaling a photo this would look pretty terrible and
pixelated, but pixelated is exactly what we want for a QR code:
CGContextSetInterpolationQuality(context, kCGInterpolationNone);
Once we've drawn the image into the context we can grab it out as a UIImage and return it. Thus, our
completed generation handler looks like this:
- (IBAction)handleGenerateButtonPressed:(id)sender {
// Disable the UI
[self setUIElementsAsEnabled:NO];
[self.stringTextField resignFirstResponder];
// Create the QR code image from the text field's contents
// (helper method name is illustrative)
CIImage *qrCode = [self createQRForString:self.stringTextField.text];
// Convert to a UIImage
UIImage *qrCodeImg = [self createNonInterpolatedUIImageFromCIImage:qrCode
withScale:2*[[UIScreen mainScreen] scale]];
// Display the QR code (image view property name is an assumption)
self.qrImageView.image = qrCodeImg;
// Re-enable the UI
[self setUIElementsAsEnabled:YES];
}
- (void)setUIElementsAsEnabled:(BOOL)enabled
{
self.generateButton.enabled = enabled;
self.stringTextField.enabled = enabled;
}
If you run the app up now you'll be able to generate QR codes all day and night. No idea what you're going
to do with them, but maybe soon we'll work out a way to read them.
[Image: QR Generator]
Conclusion
CoreImage is a handy framework for doing some fairly advanced image processing without having to get too
involved with the low-level image manipulation. It has its quirks, but it can be really useful. With the new
photo-effect filters and QR code generator it might just have saved you from finding an external dependency
or writing your own versions.
AVFoundation pipeline
AVFoundation is a large framework which facilitates the creation, editing, display and capture of multimedia.
This post isn't meant to be an introduction to AVFoundation, but we'll cover the basics of getting a live feed
from the camera to appear on the screen, since it's this we'll use to extract QR codes. In order to use
AVFoundation we need to import the framework:
@import AVFoundation;
When capturing media, we use the AVCaptureSession class as the core of our pipeline. We then need to add
inputs and outputs to complete the session. We'll set this up in the viewDidLoad method of our view controller.
Firstly, create a session:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
We need to add the main camera as an input to this session. An input is an AVCaptureDeviceInput object,
which is created from an AVCaptureDevice object:
// Get the default video capture device (the rear camera on multi-camera devices)
AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

// Wrap it in a capture input
NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if(input) {
// Add the input to the session
[session addInput:input];
} else {
NSLog(@"error: %@", error);
return;
}
Here we get a reference to the default video input device, which will be the rear camera on devices with
multiple cameras. Then we create an AVCaptureDeviceInput object using the device, and then add it to the
session.
In order to get the video to appear on the screen we need to create an AVCaptureVideoPreviewLayer. This is
a CALayer subclass which, when associated with a session, will display the current video output of the session.
Given that we have an ivar called _previewLayer of type AVCaptureVideoPreviewLayer:
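A sketch of the setup, using the session variable from earlier (the frame assignment is an assumption):

```objectivec
// Create a preview layer for the session and make it fill the screen
_previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
_previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:_previewLayer];
```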
The videoGravity property is used to specify how the video should appear within the bounds of the layer.
Since the aspect ratio of the video is not equal to that of the screen, we want to chop off the edges of the video
so that it appears to fill the entire screen, hence the use of AVLayerVideoGravityResizeAspectFill. We add
this layer as a sublayer of the view's layer.
Now this is set up, all that remains is to start the session:
// Start the capture session running
[session startRunning];
If you run the app up now (on a device) then you'll be able to see the camera's output on the screen - magic.
[Image: Preview Layer]
Capturing metadata
You've been able to do what we've achieved so far since iOS5, but in this section we're going to do some stuff
which has only been possible since iOS7.
An AVCaptureSession can have AVCaptureOutput objects attached to it, forming the end points of the AV
pipeline. The AVCaptureOutput subclass we're interested in here is AVCaptureMetadataOutput, which detects
any metadata within the video content and outputs it. The output of this class isn't in the form of images or
video, but instead metadata objects which have been extracted from the video feed itself. Setting this up is as
follows:
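A sketch of the setup, using the session variable from earlier:

```objectivec
// Create a metadata output and attach it to the session
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
[session addOutput:output];

// Log the metadata types this configuration can detect
NSLog(@"%@", [output availableMetadataObjectTypes]);
```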
Here, we've created a metadata output object and added it as an output to the session. Then we've used a
provided method to log a list of the different metadata types we can register to be informed about:
It's important to note that we have to add our metadata output object to the session before attempting this,
since the available types depend on the input device. We can see above that we can register to detect QR
codes, so let's do that:
// Register for QR code metadata
[output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
This is an array, so you can specify as many of the different metadata types as you wish.
When the metadata output object finds something within the video stream for which it can generate metadata
it tells its delegate, so we need to set the delegate:
// Receive metadata callbacks on the main queue
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
Since AVFoundation is designed to allow threaded operation, we specify which queue we want the delegate
to be called on as well.
The delegate protocol we need to adopt is AVCaptureMetadataOutputObjectsDelegate:
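A sketch of the delegate method, matching the description below (the _decodedMessage label is described later in the text):

```objectivec
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *metadata in metadataObjects) {
        if ([metadata.type isEqualToString:AVMetadataObjectTypeQRCode]) {
            // Machine-readable codes carry their decoded content in stringValue
            AVMetadataMachineReadableCodeObject *code =
                (AVMetadataMachineReadableCodeObject *)metadata;
            _decodedMessage.text = code.stringValue;
        }
    }
}
```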
The metadataObjects array consists of AVMetadataObject objects, which we inspect to find their type. Since
we've only registered to be notified of QR codes we'll be sent objects of type AVMetadataObjectTypeQRCode.
The AVMetadataMachineReadableCodeObject type has a stringValue property which contains the decoded
value of whatever metadata object has been detected. Here we're pushing this string to be displayed in the
_decodedMessage label, which was created in viewDidLoad:
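A sketch of the delegate method just described (the label ivar is from the text; the exact body is assumed):

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *metadataObject in metadataObjects) {
        if ([metadataObject.type isEqualToString:AVMetadataObjectTypeQRCode]) {
            AVMetadataMachineReadableCodeObject *code =
                (AVMetadataMachineReadableCodeObject *)metadataObject;
            // Push the decoded payload into the label
            _decodedMessage.text = code.stringValue;
        }
    }
}
```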
Running the app up now and pointing it at a QR code will cause the decoded string to appear at the bottom
of the screen:
Decoding
@interface SCShapeView : UIView
@property (nonatomic, strong) NSArray *corners;
@end
The corners array contains (boxed) CGPoint objects, each of which represents a corner of the shape we wish
to draw.
We're going to use a CAShapeLayer to draw the points, as this is an extremely efficient way of drawing shapes:
@interface SCShapeView () {
    CAShapeLayer *_outline;
}
@end

@implementation SCShapeView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        _outline = [CAShapeLayer new];
        _outline.strokeColor =
            [[[UIColor blueColor] colorWithAlphaComponent:0.8] CGColor];
        _outline.lineWidth = 2.0;
        _outline.fillColor = [[UIColor clearColor] CGColor];
        [self.layer addSublayer:_outline];
    }
    return self;
}
@end
Here we create a shape layer, set some appearance properties on it, and add it to the layer hierarchy. We are
yet to set the path of the shape - we'll do that in the setter for the corners property:
- (void)setCorners:(NSArray *)corners
{
    if(corners != _corners) {
        _corners = corners;
        _outline.path = [[self createPathFromPoints:corners] CGPath];
    }
}
This means that as the corners property is updated, the shape will be redrawn in its new position. We've
used a utility method to create a UIBezierPath from an NSArray of boxed CGPoint objects:
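A minimal sketch of such a utility, assuming it simply joins the boxed CGPoint values into a closed path:

```objc
- (UIBezierPath *)createPathFromPoints:(NSArray *)points
{
    UIBezierPath *path = [UIBezierPath bezierPath];
    // Start at the first corner...
    [path moveToPoint:[[points firstObject] CGPointValue]];
    // ...draw a line to each subsequent corner...
    for (NSUInteger i = 1; i < [points count]; i++) {
        [path addLineToPoint:[points[i] CGPointValue]];
    }
    // ...and close the shape back to the start
    [path closePath];
    return path;
}
```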
This is fairly self-explanatory - just using the API of UIBezierPath to create a completed shape.
Now we've created this shape view, we need to use it in our view controller to show the detected QR code.
Let's create an ivar, and create the object in viewDidLoad:
Now we need to update this view in the metadata output delegate method:
AVFoundation uses a different coordinate system to that used by UIKit when rendering on the screen, so
the first part of this code snippet uses the transformedMetadataObjectForMetadataObject: method on
AVCaptureVideoPreviewLayer to translate the coordinate system from AVFoundation, to be in the coordinate
system of our preview layer.
Next we set the frame of our shape overlay to be the same as the bounding box of the detected code, and
update its visibility.
We now need to set the corners property on the shape view so that the overlay is positioned correctly, but
before we do that we need to change coordinate systems again.
The corners property on AVMetadataMachineReadableCodeObject is an NSArray of dictionary objects, each
of which has X and Y keys. Since we translated the coordinate systems, the values associated with the corners
refer to the video preview layer - but we want them to be in terms of our shape overlay. Therefore we use the
following utility method:
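A sketch of such a utility (the method name and parameters are assumptions; the X/Y keys and conversion call are from the text):

```objc
- (NSArray *)translatePoints:(NSArray *)points
                    fromView:(UIView *)fromView
                      toView:(UIView *)toView
{
    NSMutableArray *translatedPoints = [NSMutableArray new];
    for (NSDictionary *point in points) {
        // Each corner is a dictionary with X and Y keys
        CGPoint pointValue = CGPointMake([point[@"X"] floatValue],
                                         [point[@"Y"] floatValue]);
        // Convert between the two views' coordinate systems
        CGPoint translated = [fromView convertPoint:pointValue toView:toView];
        [translatedPoints addObject:[NSValue valueWithCGPoint:translated]];
    }
    return [translatedPoints copy];
}
```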
Here we use convertPoint:toView: from UIView to change coordinate systems, and return an NSArray
containing NSValue boxed CGPoint objects instead of NSDictionary objects. We can then pass this to the
corners property of our shape view.
If you run the app up now you'll see the bounding box of the code highlighted as well as the decoded message:
The final bits of code in the example app cause the decoded message and bounding box to disappear after a
certain amount of time. This prevents the box from staying on the screen when there are no QR codes present.
- (void)startOverlayHideTimer
{
    // Cancel it if we're already running
    if(_boxHideTimer) {
        [_boxHideTimer invalidate];
    }
    // ...
}
Each time this method gets called it resets the timer, which, when it finally fires, will call the following
method:
- (void)removeBoundingBox:(id)sender
{
    // Hide the box and remove the decoded text
    _boundingBox.hidden = YES;
    _decodedMessage.text = @"";
}
Conclusion
AVFoundation is a very complex and powerful framework, and in iOS7 it just got better. Detecting different
barcodes live used to be quite a difficult task on mobile devices, but with the introduction of these new metadata
output types it is now really simple and efficient. Whether or not we should be using QR codes is a different
question, but at least it's easy if we want to =)
Create a beacon
To make an app act like an iBeacon we use CoreLocation to create the beacon properties, and then ask
CoreBluetooth to broadcast them appropriately.
iBeacons have several properties which are used to identify them uniquely.
proximityUUID. This is an NSUUID object which identifies your company's beacons. You can have many
beacons with the same UUID, and set CoreLocation to notify you whenever one comes into range.
major. An NSNumber representing the major ID of this particular beacon. This could identify a particular
store, or a floor within a store. The number is represented as a 16-bit unsigned integer.
minor. Another NSNumber, which represents the individual beacon.
It's possible to set CoreLocation to notify at any of the three possible granularities of iBeacon ID - i.e. notify
whenever any iBeacon with the same UUID is in range, or with the same UUID and major ID, or require a
specific beacon - with UUID, major and minor IDs all matching.
We need to include both CoreLocation and CoreBluetooth for this project:
@import CoreBluetooth;
@import CoreLocation;
In order to make an app appear as a beacon, we create a CLBeaconRegion object, specifying the IDs we require.
In our case we will only set the UUID:
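A sketch of this creation (the UUID string and ivar/identifier names are assumptions):

```objc
// The UUID identifying our company's beacons; generate one with `uuidgen`
NSUUID *uuid = [[NSUUID alloc] initWithUUIDString:@"3B85DB5B-..."];
_beaconRegion = [[CLBeaconRegion alloc] initWithProximityUUID:uuid
                                                   identifier:@"com.example.hotorcold"];
```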
A CBPeripheralManager has to have a delegate set (even though we won't be using it in this example), and it
has a required method:
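The required method is peripheralManagerDidUpdateState:; an empty implementation is enough here (a sketch):

```objc
// CBPeripheralManagerDelegate's only required method; we check the
// state ourselves before advertising, so there is nothing to do here
- (void)peripheralManagerDidUpdateState:(CBPeripheralManager *)peripheral
{
}
```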
Now, when we want to start broadcasting as an iBeacon then we get hold of a dictionary of settings from the
CLBeaconRegion and pass it to the CBPeripheralManager to begin broadcast:
- (IBAction)handleHidingButtonPressed:(id)sender {
    if(_cbPeripheralManager.state < CBPeripheralManagerStatePoweredOn) {
        NSLog(@"Bluetooth must be enabled in order to act as an iBeacon");
        return;
    }
    // ... (build the toBroadcast dictionary from the CLBeaconRegion)
    [_cbPeripheralManager startAdvertising:toBroadcast];
}
Firstly we check that the peripheral manager is ready to go, before constructing the settings to broadcast, and
then beginning to advertise the details. The measuredPower argument specifies the power in dBm observed at
a distance of 1m from the transmitter.
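Building the settings dictionary might look like this (a sketch; the region ivar name is assumed, and passing nil for the measured power uses the device default):

```objc
// Ask the beacon region for the advertisement payload
NSDictionary *toBroadcast = [_beaconRegion peripheralDataWithMeasuredPower:nil];
[_cbPeripheralManager startAdvertising:toBroadcast];
```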
Hiding
We can stop the iBeacon by calling the stopAdvertising method on the CBPeripheralManager object.
Beacon Ranging
Using CoreLocation, we can request alerts when an iBeacon with a particular ID comes into range, or get
regular updates as to the approximate range of all local beacons. In our HotOrCold game we are going to
request range updates for the beacon we created above.
We need to create a CoreLocation CLLocationManager:
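A sketch of that creation (ivar name assumed):

```objc
// Create the location manager and receive the ranging callbacks ourselves
_clLocationManager = [CLLocationManager new];
_clLocationManager.delegate = self;
```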
Notice that we're setting the delegate as well, and we'll implement the following delegate method:
- (void)locationManager:(CLLocationManager *)manager
        didRangeBeacons:(NSArray *)beacons
               inRegion:(CLBeaconRegion *)region
{
    if([region isEqual:_rangedRegion]) {
        // Let's just take the first beacon
        CLBeacon *beacon = [beacons firstObject];
        self.statusLabel.textColor = [UIColor whiteColor];
        self.signalStrengthLabel.textColor = [UIColor whiteColor];
        self.signalStrengthLabel.text =
            [NSString stringWithFormat:@"%lddB", (long)beacon.rssi];
        switch (beacon.proximity) {
            case CLProximityUnknown:
                self.view.backgroundColor = [UIColor blueColor];
                [self setStatus:@"Freezing!"];
                break;

            case CLProximityFar:
                self.view.backgroundColor = [UIColor blueColor];
                [self setStatus:@"Cold!"];
                break;

            case CLProximityNear:
                self.view.backgroundColor = [UIColor purpleColor];
                [self setStatus:@"Warmer"];
                break;

            case CLProximityImmediate:
                self.view.backgroundColor = [UIColor redColor];
                [self setStatus:@"HOT!"];
                break;

            default:
                break;
        }
    }
}
This delegate method responds to ranging updates from beacons (we'll register to receive these in a moment).
The delegate method gets called at a frequency of 1Hz, and is provided with an array of beacons. A CLBeacon
has properties which determine its identity, and also the approximate range of the beacon. We're using this
to set the background color of the view and update the status label using the following utility method:
- (void)setStatus:(NSString *)status
{
    self.statusLabel.hidden = NO;
    self.statusLabel.text = status;
}
In order for this delegate method to be called, we need to ask the location manager to start ranging for a
particular beacon:

[_clLocationManager startRangingBeaconsInRegion:_rangedRegion];

and we can stop again with:

[_clLocationManager stopRangingBeaconsInRegion:_rangedRegion];
If you run up this app on 2 devices (both of which have Bluetooth LE) and set one to hide and one to seek you
can play HotOrCold yourself:
Conclusion
iBeacons offer fantastic potential - they could even be one of the most disruptive new features of iOS7. I think
they are both Apple's answer to, and the final nail in the coffin of, NFC on mobile devices. Hopefully not
only will our phones soon have the correct information available to us as we arrive at a service desk, but we
might also start to see indoor navigation. I encourage you to take a look at the iBeacon API - it's not very
complicated, and I look forward to seeing your innovative uses!
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for(AVMetadataObject *metadataObject in metadataObjects) {
        if([metadataObject.type isEqualToString:AVMetadataObjectTypeFace]) {
            // Take an image of the face and pass to CoreImage for detection
            AVCaptureConnection *stillConnection =
                [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
            [_stillImageOutput
             captureStillImageAsynchronouslyFromConnection:stillConnection
             completionHandler:
             ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                 if(error) {
                     NSLog(@"There was a problem");
                     return;
                 }
                 // ...
             }];
        }
    }
}
This is fairly similar to what we did with QR codes, only now we have added a new output type to the session
- AVCaptureStillImageOutput. This allows us to take a photo of the input at a given moment - which is
exactly what captureStillImageAsynchronouslyFromConnection:completionHandler: does. So, when we
are notified that AVFoundation has detected a face, we take a still image of the current input, and stop the
session.
We create a JPEG representation of the captured image with the following:
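A sketch of that conversion (variable names assumed):

```objc
// Convert the still-image sample buffer into JPEG data, then a UIImage
NSData *jpegData =
    [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [UIImage imageWithData:jpegData];
```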
Now we pop this into a UIImageView, and create a CIImage version as well, in preparation for the CoreImage
facial feature detection. We'll take a look at this imageContainsSmiles:callback: method next.
if(!_ciContext) {
    _ciContext = [CIContext contextWithOptions:nil];
}

if(!_faceDetector) {
    _faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                       context:_ciContext
                                       options:nil];
}
To get the detector to perform its search, we invoke the featuresInImage:options: method:
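A sketch of the invocation (the CIImage variable name and the orientation value are assumptions):

```objc
// Ask for smile and blink detection; the orientation should match the UI
NSDictionary *options = @{CIDetectorSmile            : @YES,
                          CIDetectorEyeBlink         : @YES,
                          CIDetectorImageOrientation : @5};
NSArray *features = [_faceDetector featuresInImage:image options:options];
```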
In order to get the detector to perform smile and blink detection we have to specify as much in the detector
options (CIDetectorEyeBlink and CIDetectorSmile). The CoreImage face detector is orientation-specific, and
therefore we're also setting the detector orientation here to match the orientation in which the app has been
designed.
Now we can loop through the features array (which contains CIFaceFeature objects) and interrogate each
one to find out whether it contains a smile or blinking eyes:
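A sketch of that loop (the flag name is an assumption; the properties are real CIFaceFeature API):

```objc
// Inspect each detected face for a smile and both eyes open
BOOL happyPicture = NO;
for (CIFaceFeature *feature in features) {
    if (feature.hasSmile && !feature.leftEyeClosed && !feature.rightEyeClosed) {
        happyPicture = YES;
    }
}
```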
dispatch_async(dispatch_get_main_queue(), ^{
callback(happyPicture);
});
Our callback method updates the label to describe whether or not a good photo was taken:
If you run the app up you can see how good the CoreImage facial feature detector is:
In addition to these properties, it's also possible to find the positions of the different facial features, such as
the eyes and the mouth.
Conclusion
Although not a ground-breaking addition to the API, this advance in the CoreImage facial detector adds a
nice ability to interrogate your facial images. It could make a nice addition to a photography app - helping
users take all the selfies they need.
Without estimation
We create a simple UITableView with a UITableViewController, containing just one section with 200 rows. The
cells contain their index and their height, which varies on a row-by-row basis. This is important - if all the
rows are the same height then we don't need to implement the heightForRowAtIndexPath: method on the
delegate, and we won't get any improvement out of using the new row height estimation method.
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
    // Return the number of sections.
    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView
 numberOfRowsInSection:(NSInteger)section
{
    // Return the number of rows in the section.
    return 200;
}

// ...
The heightForRowAtIndex: method is a utility method which will return the height of a given row:
- (CGFloat)heightForRowAtIndex:(NSUInteger)index
{
    CGFloat result;
    for (NSInteger i = 0; i < 1e5; i++) {
        result = sqrt((double)i);
    }
    result = (index % 3 + 1) * 20.0;
    return result;
}
If we had a complex table with cells of differing heights, it is likely that we would have to construct the cell
to be able to determine its height, which takes a long time. To simulate this we've put a superfluous loop
calculation in the height calculation method - it isn't of any use, but takes some computational time.
We also need a delegate to return the row heights as we go, so we create SCNonEstimatingTableViewDelegate:
@interface SCNonEstimatingTableViewDelegate : NSObject <UITableViewDelegate>
- (instancetype)initWithHeightBlock:(CGFloat (^)(NSUInteger index))heightBlock;
@end
This has a constructor which takes a block which is used to calculate the row height of a given row:
@implementation SCNonEstimatingTableViewDelegate
{
    CGFloat (^_heightBlock)(NSUInteger index);
}

- (instancetype)initWithHeightBlock:(CGFloat (^)(NSUInteger))heightBlock
{
    self = [super init];
    if(self) {
        _heightBlock = [heightBlock copy];
    }
    return self;
}
@end
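A sketch of the delegate method being described (the log message text is assumed):

```objc
- (CGFloat)tableView:(UITableView *)tableView
    heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    NSLog(@"Height (row %ld)", (long)indexPath.row);
    return _heightBlock((NSUInteger)indexPath.row);
}
```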
This logs that it has been called, and uses the block to calculate the row height for the specified index path.
With a bit of wiring up in the view controller then we're done:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // ... (create the delegate and assign it to self.tableView.delegate)
}
Running the app up now will demonstrate the variable row height table:
TableView
Looking at the log messages we can see that the row height method gets called for every single row in the
table before we first render the table. This is because the table view needs to know its total height (for drawing
the scroll bar etc). This can present a problem in complex table views, where calculating the height of a row
is a complex operation - it might involve fetching the content, or rendering the cell to discover how much
space is required. It's not always an easy operation. Our heightForRowAtIndex: utility method simulates this
complexity with a long loop of calculations. Adding a bit of timing logic, we can see that in this contrived
example (and running on a simulator) we have a delay of nearly half a second from loading the tableview to
it appearing:
Without estimation
With estimation
The new height estimation delegate methods provide a way to improve this initial delay to rendering the
table. If we implement tableView:estimatedHeightForRowAtIndexPath: in addition to the aforementioned
tableView:heightForRowAtIndexPath: then, rather than calling the height method for every row before
rendering the tableview, the estimated height method will be called for every row, and the height method
just for rows which are being rendered on the screen. Therefore, we have separated the height calculation into
a method which requires the exact height (since the cell is about to appear on screen), and a method which is
just used to calculate the height of the entire tableview (hence it doesn't need to be perfectly accurate).
To demonstrate this in action we create a new delegate which will implement the height estimation method:
@interface SCEstimatingTableViewDelegate : SCNonEstimatingTableViewDelegate
- (instancetype)initWithHeightBlock:(CGFloat (^)(NSUInteger index))heightBlock
                    estimationBlock:(CGFloat (^)(NSUInteger index))estimationBlock;
@end
Here we've got a constructor which takes two blocks - one will be used for the exact height method, and one
for the estimation:
@implementation SCEstimatingTableViewDelegate {
    CGFloat (^_estimationBlock)(NSUInteger index);
}

// ...
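The estimation delegate method itself can be sketched as follows (the log message text is assumed):

```objc
- (CGFloat)tableView:(UITableView *)tableView
    estimatedHeightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    NSLog(@"Estimated height (row %ld)", (long)indexPath.row);
    return _estimationBlock((NSUInteger)indexPath.row);
}
```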
We update the view controller with a much cheaper height estimation method - just returning the average
height for our cells (40.0):
- (void)viewDidLoad
{
    [super viewDidLoad];
    if(self.enableEstimation) {
        _delegate = [[SCEstimatingTableViewDelegate alloc] initWithHeightBlock:
            ^CGFloat(NSUInteger index) {
                return [self heightForRowAtIndex:index];
            } estimationBlock:^CGFloat(NSUInteger index) {
                return 40.0;
            }];
    } else {
        _delegate = [[SCNonEstimatingTableViewDelegate alloc] initWithHeightBlock:
            ^CGFloat(NSUInteger index) {
                return [self heightForRowAtIndex:index];
            }];
    }
    self.tableView.delegate = _delegate;
}
Running the app up now and observing the log, we'll see that the height method no longer gets called for
every cell before the initial render; instead the estimated height method is. The height method is called just for
the cells which are being rendered on the screen. Consequently we see that the load time has dropped to a fifth
of a second:
With Estimation
Conclusion
As was mentioned before, this example is a little contrived, but it does demonstrate rather well that if
calculating the actual height is hard work then implementing the new estimation height method can really
improve the responsiveness of your app, particularly if you have a large tableview. There are additional height
estimation methods for section headers and footers which work in precisely the same manner. It might not be
a groundbreaking API change, but in some cases it can really improve the user experience, so it's definitely
worth doing.
In Practice
Reading through the property descriptions above might make you think that it's all very easy, and in my
experience it is. In some cases. Otherwise it's just confusing.
Here we need to set the edgesForExtendedLayout correctly, otherwise your view will appear underneath the
bar. This can be set in interface builder as follows:
Interface Builder
Or in code with:
self.edgesForExtendedLayout = UIRectEdgeNone;
Other cases
If you run up the accompanying sample app for today's post then you'll notice that there are some other
examples provided - namely a scrollview inside a tab controller, and a tableview inside a tab controller. For
some reason (I think it is a bug, but would love to be corrected), the scroll view insets are no longer adjusted
as they were inside the navigation controller:
Conclusion
The fact that all view controllers are now full screen has foxed a lot of developers, and with good reason. The
documentation around them isn't great, and I think there might be a bug in the scroll view inset adjustment
for tab bar controllers. However, it is worth playing around with - the concept of multiple layers is integral
to the new iOS7 look and feel, and when it works it does look rather good.
TextKit
TextKit is a massive framework, and this post isn't going to attempt to explain it in great detail at all. In order
to understand the multi-column project there are four classes to be familiar with:
NSTextStorage: A subclass of NSAttributedString which contains both the content and formatting markup
for the text we wish to render. It enables editing, and keeps references to the relevant layout managers
to inform them of changes in the underlying text store.
NSLayoutManager: Responsible for managing the rendering of the text from a store in one or multiple text
container objects. Converts the underlying Unicode characters into glyphs which can be rendered on
screen. Can have multiple text containers to allow flowing of the text between different regions.
NSTextContainer: Defines the region in which the text will be rendered. It is provided with glyphs
from the layout manager and fills the area it specifies. Can use UIBezierPath objects as exclusion zones.
UITextView: Actually renders the text on screen. It has been updated for iOS7 with the addition of a
constructor which takes an NSTextContainer.
We are going to use all of these classes to create a multi-column text view. For far more information about
the TextKit architecture and how to use it then take a look at the TextKit Tutorial from our very own Colin
Eberhardt.
Multiple Columns
Were going to put all the code into a view controller, so need some ivars to keep hold of the text store and
the layout manager:
http://www.raywenderlich.com/50151/text-kit-tutorial
https://twitter.com/colineberhardt
@interface SCViewController () {
    NSLayoutManager *_layoutManager;
    NSTextStorage *_textStorage;
}
@end
We'll create these in viewDidLoad. Firstly, let's look at the text storage. We've got a .txt file as part of the
bundle, which contains some plain-text Lorem Ipsum. Since NSTextStorage is a subclass of NSAttributedString
we can use the initWithFileURL:options:documentAttributes:error: constructor:
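A sketch of both steps (the file name and error handling are assumptions):

```objc
// Load the bundled text into the store...
NSURL *contentURL = [[NSBundle mainBundle] URLForResource:@"lipsum"
                                            withExtension:@"txt"];
_textStorage = [[NSTextStorage alloc] initWithFileURL:contentURL
                                              options:nil
                                   documentAttributes:NULL
                                                error:NULL];
// ...then create a layout manager and attach it to the store
_layoutManager = [[NSLayoutManager alloc] init];
[_textStorage addLayoutManager:_layoutManager];
```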
Once we've created the _layoutManager we add it to the _textStorage. This not only provides the text content
to the layout manager, but will also ensure that if the underlying content changes, the layout manager will be
informed appropriately.
At the end of viewDidLoad we're calling layoutTextContainers, which is a utility method we'll take a look
at now.
We are going to loop through each of the columns, creating a new NSTextContainer, to specify the dimensions
of the text, and a UITextView to render it on the screen. The loop looks like this:
NSUInteger lastRenderedGlyph = 0;
CGFloat currentXOffset = 0;
while (lastRenderedGlyph < _layoutManager.numberOfGlyphs) {
    ...
}
We set up a couple of variables - one which will allow the loop to end (lastRenderedGlyph), and one to store
the x-offset of the current column. NSLayoutManager has a property which contains the total number of glyphs
which it is responsible for, so we're going to loop through until we've drawn all the glyphs we have.
After the loop has completed we're going to work out the correct size of the content we've created, and set it
on the scrollview, so that we can move between the pages as expected.
Inside the loop, the first thing we need to do is work out the dimensions of the current column:
We're setting the column to be the full height of the view, and half its width.
Now we can create an NSTextContainer to lay out the glyphs within the column area we have specified:
We also add the text container to the layout manager. This ensures that the container is provided with a
sequence of glyphs to render.
In order to get the container to render on the screen, we have to create a UITextView:
Here we're specifying the textContainer the text view is going to represent - using the newly introduced
initWithFrame:textContainer: method.
Finally we need to update our local variables for tracking the last rendered glyph and the current column position:
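Pulling those steps together, one iteration of the loop body might look like this (a sketch; the scroll view property name and exact sizing are assumptions):

```objc
// Size the column: full height, half the view's width
CGRect textViewFrame = CGRectMake(currentXOffset, 0,
                                  CGRectGetWidth(self.view.bounds) / 2,
                                  CGRectGetHeight(self.view.bounds));
// A text container defines the region the glyphs will fill
NSTextContainer *textContainer =
    [[NSTextContainer alloc] initWithSize:textViewFrame.size];
[_layoutManager addTextContainer:textContainer];

// Wrap the container in a text view to render it on screen
UITextView *textView = [[UITextView alloc] initWithFrame:textViewFrame
                                           textContainer:textContainer];
[self.scrollView addSubview:textView];

// Track how far through the text we are, and advance to the next column
lastRenderedGlyph =
    NSMaxRange([_layoutManager glyphRangeForTextContainer:textContainer]);
currentXOffset += CGRectGetWidth(textViewFrame);
```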
For those of you who have tried to create text columns in iOS before, you'll be amazed to hear that we're
done! If you run the app up now you'll see the Lorem Ipsum content nicely laid out in columns half the
screen width, with swiping enabled to move between pages:
Conclusion
TextKit is a major addition to iOS and represents some extremely powerful functionality. We've taken a look
today at how easy it is to put text into columns, and this barely scratches the surface of what is available. I
encourage you to investigate TextKit further if you are displaying any more than small amounts of text - it's
actually one of the new areas of iOS7 with pretty good documentation.
On the first line, we create an NSDictionary of descriptor attributes - here just specifying that we're only
interested in fonts which are downloadable. Then we create a CTFontDescriptorRef using this dictionary
- note here that we cast the NSDictionary to a CFDictionaryRef, making use of toll-free bridging. Finally we
call the method which will provide us with a list of font descriptors which match the descriptor we provided
- i.e. a list of descriptors which represent downloadable fonts.
The call to this last method is blocking, and may require a network call, so we're going to wrap this
functionality up in a requestDownloadableFontList method:
- (void)requestDownloadableFontList
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        NSDictionary *descriptorOptions = @{(id)kCTFontDownloadableAttribute : @YES};
        CTFontDescriptorRef descriptor =
            CTFontDescriptorCreateWithAttributes((__bridge CFDictionaryRef)descriptorOptions);
        CFArrayRef fontDescriptors =
            CTFontDescriptorCreateMatchingFontDescriptors(descriptor, NULL);
        dispatch_async(dispatch_get_main_queue(), ^{
            [self fontListDownloadComplete:(NSArray *)CFBridgingRelease(fontDescriptors)];
        });
    });
}
- (void)fontListDownloadComplete:(NSArray *)fontList
{
    // Need to reorganise array into dictionary
    NSMutableDictionary *fontFamilies = [NSMutableDictionary new];
    for(UIFontDescriptor *descriptor in fontList) {
        NSString *fontFamilyName = [descriptor
                                    objectForKey:UIFontDescriptorFamilyAttribute];
        NSMutableArray *fontDescriptors = [fontFamilies objectForKey:fontFamilyName];
        if(!fontDescriptors) {
            fontDescriptors = [NSMutableArray new];
            [fontFamilies setObject:fontDescriptors forKey:fontFamilyName];
        }
        [fontDescriptors addObject:descriptor];
    }
    // ...
    [self.tableView reloadData];
}
Here we are simply reorganising the array of font descriptors into a dictionary, arranged by font family.
We're making use here of the fact that UIFontDescriptor is toll-free bridged with CTFontDescriptorRef.
Once we have arranged the data correctly, we can reload the table. With the tableview datasource methods
set appropriately, and viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    self.title = @"Families";
    [self requestDownloadableFontList];
}
we can run the app up and see that the first page of the navigation controller will look like this.
The next level of the navigation controller displays the fonts within a specific family, so to do that we create
an NSArray property which contains a list of font descriptors. We set this in the prepareForSegue: method
of the first view controller:
With appropriate datasource methods, the second level of the drill-down will look like this:
Downloading a font
The final stage of the app will display what the font looks like with some sample glyphs, if the font is available.
Otherwise the user will have the opportunity to download the font.
The download process is contained entirely within the handleDownloadPressed: method, and the function we're
interested in is CTFontDescriptorMatchFontDescriptorsWithProgressHandler. This takes a CFArrayRef of
font descriptors and downloads the fonts if required. It takes a block as a parameter which provides progress
updates. This method returns immediately, and the operation is performed on a background queue.
- (IBAction)handleDownloadPressed:(id)sender {
self.downloadProgressBar.hidden = NO;
CTFontDescriptorMatchFontDescriptorsWithProgressHandler(
(__bridge CFArrayRef)@[_fontDescriptor],
NULL,
^bool(CTFontDescriptorMatchingState state, CFDictionaryRef progressParameter) {
double progressValue = [[(__bridge NSDictionary *)progressParameter
objectForKey:(id)kCTFontDescriptorMatchingPercentage] doubleValue];
if (state == kCTFontDescriptorMatchingDidFinish) {
dispatch_async(dispatch_get_main_queue(), ^{
self.downloadProgressBar.hidden = YES;
[self updateView];
});
} else {
dispatch_async(dispatch_get_main_queue(), ^{
self.downloadProgressBar.progress = progressValue;
});
}
return (bool)YES;
}
);
}
In the progress block, we extract the current progress percentage from the provided dictionary, and update
the progress bar as appropriate. If the state parameter suggests that the download has been completed, we
call updateView, which is a method we have created to apply the font to the sample glyphs. Note that we have
to ensure that the UI updates are performed on the main thread, as we usually do:
- (void)updateView
{
NSString *fontName = [self.fontDescriptor objectForKey:UIFontDescriptorNameAttribute];
self.title = fontName;
UIFont *font = [UIFont fontWithName:fontName size:26.f];
if(font && [font.fontName isEqualToString:fontName]) {
self.sampleTextLabel.font = font;
self.downloadButton.enabled = NO;
self.detailDescriptionLabel.text = @"Font available";
} else {
self.sampleTextLabel.font = [UIFont systemFontOfSize:font.pointSize];
self.downloadButton.enabled = YES;
self.detailDescriptionLabel.text = @"This font is not yet downloaded";
}
}
Running the app up now will allow us to browse through the list of available fonts from Apple, and download
each of them to try them out.
Conclusion
Downloadable fonts are a handy feature which will allow you to customize the appearance of your app
without having to license a font and bundle it with the app. However, it's important to ensure that you
handle the case where the user doesn't have network connectivity - what should the fall-back font be, and
does the UI work with both options?
Here we're allowing the user to enter a name which will be used to identify their device to the users they
attempt to connect to.
The MCSession object is used to coordinate sending data between peers within that session. We firstly create
one and then add peers to it:
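A sketch of that setup (the ivar and text field names are assumptions):

```objc
// The peer ID wraps the user-entered display name
_peerID = [[MCPeerID alloc] initWithDisplayName:self.nameTextField.text];
// The session coordinates data transfer; we receive its callbacks
_session = [[MCSession alloc] initWithPeer:_peerID];
_session.delegate = self;
```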
The session delegate receives callbacks for monitoring as peers change state (e.g. disconnect), along with
methods which are called when a peer in the network initiates a data transfer.
In order to add peers to the session there is a view controller, MCBrowserViewController, which presents a
list of local devices to the user and allows them to select which they would like to establish a connection
with. We create one of these and then present it as a modal view controller:
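A sketch of that presentation (the service type string is an assumption):

```objc
// The service type must match the one the advertiser uses
MCBrowserViewController *browserVC =
    [[MCBrowserViewController alloc] initWithServiceType:@"photo-spray"
                                                 session:_session];
browserVC.delegate = self;
[self presentViewController:browserVC animated:YES completion:NULL];
```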
The serviceType argument is a string which represents the service we're trying to connect to. This string can
comprise lowercase characters, numbers and hyphens, and should take a Bonjour-style form.
Again we assign self to the delegate property - this time adopting the MCBrowserViewControllerDelegate
protocol. There are two methods we need to implement - for completion and cancellation of the browser view
controller. Here we're going to dismiss the browser and enable a button if we were successful:
- (void)browserViewControllerDidFinish:(MCBrowserViewController *)browserViewController
{
[browserViewController dismissViewControllerAnimated:YES completion:^{
self.takePhotoButton.enabled = YES;
}];
}
If we run the app up at this point we'll be able to input a peer name, and then bring up the browser to
search for other devices. At this stage we haven't implemented the advertising functionality for other
devices, so we can't connect to anything. We'll implement this in the next section; the pictures below show
the connection process if we do have a device to connect to, and the connection is accepted:
Advertising availability
Advertising availability is made possible through the MCAdvertiserAssistant class, which is responsible both
for managing the network layer, and also presenting an alert to the user to allow them to accept or reject an
incoming connection.
In the same way that we needed a session and peer ID to browse, we need them for advertising, so again we
allow the user to specify a string to be used as a peer name:
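A sketch of the advertiser setup might look like this - as before, the `nameTextField` outlet and the @"df-photos" service type string are assumptions, and the service type must match the one used by the browser:

```objc
// Session setup mirrors the browsing side
MCPeerID *peerID = [[MCPeerID alloc] initWithDisplayName:self.nameTextField.text];
_session = [[MCSession alloc] initWithPeer:peerID];
_session.delegate = self;
// The assistant handles both the network layer and the accept/reject alert
_advertiserAssistant =
    [[MCAdvertiserAssistant alloc] initWithServiceType:@"df-photos"
                                         discoveryInfo:nil
                                               session:_session];
```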
We're using the same string for the serviceType parameter as we did within the browser - this will enable
the connections to be matched appropriately.
Finally we need to start advertising our availability:
[_advertiserAssistant start];
If we now fire up the browser on one device, and the advertiser on another, then they should be able to find
each other. When the device appears in the browser, and the user taps on it, the user with the advertising
device will be presented with an alert allowing them to choose whether or not to make the connection:
(Image: the connection permission alert)
Sending Data
There are 3 ways in which data can be transferred over the multipeer network weve established - an NSData
object, an NSStream or sending a file-based resource. All three of these share a common paradigm - the
MCSession object has methods to initiate each of these transfers, and then the session at the receiving end will
call the appropriate delegate method.
For example, were going to take a photo with one device and then have it automagically appear on the screen
of the other device. Well use the NSData approach for this example, but the methodology is very similar for
each of them.
We use UIImagePickerController to take a simple photo:
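A sketch of presenting the picker - assuming it is wired up to the take-photo button's action - might look like:

```objc
// Present the standard camera UI; we adopt
// UIImagePickerControllerDelegate to receive the result
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.delegate = self;
[self presentViewController:picker animated:YES completion:NULL];
```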
And implement the following delegate method to get the photo out as expected:
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *photo = info[UIImagePickerControllerOriginalImage];
    UIImage *smallerPhoto = [self rescaleImage:photo toSize:CGSizeMake(800, 600)];
    NSData *jpeg = UIImageJPEGRepresentation(smallerPhoto, 0.2);
    [self dismissViewControllerAnimated:YES completion:^{
        NSError *error = nil;
        [_session sendData:jpeg
                   toPeers:[_session connectedPeers]
                  withMode:MCSessionSendDataReliable
                     error:&error];
    }];
}
The line of interest here is the call to sendData:toPeers:withMode:error: on the MCSession object. This
takes an NSData object and sends it to other peers in the session - here we're choosing to send it to all
the connected peers. The mode allows you to select whether you want the data transferred reliably.
If you select reliable then the messages will definitely arrive, and in the correct order, but with a
higher time overhead. Using the unreliable mode means that some messages may be lost, but the delay will
be much smaller.
To receive the data on the other device we just provide an appropriate implementation for the correct delegate
method:
- (void)session:(MCSession *)session
 didReceiveData:(NSData *)data
       fromPeer:(MCPeerID *)peerID
{
    UIImage *image = [UIImage imageWithData:data];
    // Session delegate callbacks arrive on a background queue,
    // so hop to the main queue before touching the UI
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;
        self.imageView.contentMode = UIViewContentModeScaleAspectFill;
    });
}
Here we're simply creating a UIImage from the NSData object, and then setting it as the image on a
UIImageView. The following pictures show the photo being taken on one device, and then displayed on another:
The streaming and resource APIs work in much the same way, although the resource API provides
asynchronous progress updates, and is hence more suitable for large data transfers.
Conclusion
The MultipeerConnectivity framework is incredibly powerful, and Apple-like in its concept of abstracting the
fiddly technical details away from the developer. It's pretty obvious that the new AirDrop functionality which
appeared in iOS7 is built on top of this framework, and that's very much the tip of the iceberg in terms of what
could be built using it. Imagine an iBeacon which, when you're near it, not only notifies you of
the fact, but then sends you information without using the internet. Maybe you could have multi-angle video
streamed to your device at a sports event, but only if you're in the venue? I can't wait to see what people
build!
Afterword
24 days' worth of new features is pretty impressive, and this list is by no means exhaustive. We've covered a
lot of ground, and I hope that you've learnt something along the way.
If you have any feedback about the book or its content then I'd love to hear it - hit me up on twitter at
@iwantmyrealname, or email me sdavies@shinobicontrols.com.
The day-by-day format is a lot of fun to create, and has hopefully been useful to you. I might well consider
producing similar blog series on different topics in the future - any suggestions or comments will be greatly
appreciated.
Useful Links
I've compiled a few useful links of interest, for further reading:
shinobicontrols.com/blog - ShinobiControls blog - to keep up to date on ShinobiControls products, and
other technical series such as this one.
iwantmyreal.name - My personal blog
raywenderlich.com - Excellent resource for learning iOS, including the new book iOS 7 by Tutorials
What's new in iOS7 - Apple documentation for the new features introduced in iOS7.
https://twitter.com/iwantmyrealname
mailto:sdavies@shinobicontrols.com
http://www.shinobicontrols.com/blog
http://iwantmyreal.name/
http://www.raywenderlich.com/
https://developer.apple.com/library/ios/releasenotes/General/WhatsNewIniOS/Articles/iOS7.html#//apple_ref/doc/uid/TP40013162-SW1