
INTRODUCTION
OVERVIEW
Facepass enables you to communicate with anybody and everybody around you by simply capturing their face with the camera and leaving comments anonymously, because people tend to be unbiased only when you give them the security to do so. Users have the privilege to choose whether or not to display their name once a stranger captures their face. The application uses iOS’s Core Image API to detect the face, a Python script built on the OpenCV library (deployed to Heroku) to perform the facial recognition, and Firebase as the data storage solution. The Python script employs the LBP (Local Binary Pattern) algorithm, which divides each scanned face image into a grid of regions and compares neighboring pixels to form binary patterns. This texture-based pattern recognition algorithm has proven to be one of the most accurate available in OpenCV. The application has been written for both iOS and Android with the same backend/storage solution. It is primarily targeted at shy individuals, teens and autistic individuals who are not immediately comfortable talking with strangers. This project can help people overcome unnecessary shyness and fear, and aid those who cannot properly express what they want.

FACIAL RECOGNITION
In the early 1960s, an unnamed intelligence agency funded the first attempt at automation of facial recognition. Technology has improved, needs have changed and data collection has become significantly smarter since then, allowing facial recognition to have real-world everyday consequences, both positive and negative.


Traditionally a government-centric technology, facial recognition has become the talk of the airline industry, the banking industry, smartphone companies, the computer industry, and more. With accelerated improvements in processing power, facial recognition can now be performed in real time and without the consent of the individual.

What is the social impact to our privacy?
The perception was much different pre-September 11, 2001. This perceived type of futuristic technology was only something people saw in Hollywood and fell under the umbrella of Big Brother is watching us. At Super Bowl XXXV the federal government ran a test in which it scoured the 100,000 attendees and reported to have found 19 potential risks. This test was subsequently discovered by the media, leading to public conversation on privacy concerns.

When questioned about the secret test, Tampa police spokesman Joe Durkin expressed, “It confirmed our suspicions that these types of criminals would be coming to the Super Bowl to try and prey on the public.” The dilemma, which in my opinion was the result of 9/11, becomes a conversation about improved security and the impact on our personal privacy.

Nothing substantial came of this test other than Tampa exploring the use of facial recognition further for a year, with mixed results. Being able to run facial recognition in real time poses all sorts of complications: lighting, facial angles, covered faces, rainy weather. Their testing eventually fizzled out over the next few months.

Public conversation began to shift after 9/11, when fear of terrorism and its prevention overshadowed the invasion-of-privacy issue. Would you be willing to have your face scanned as you entered a supermarket or a concert venue if there was a slight chance it could catch a potential threat? Is this living in fear, or is this being intelligent about utilizing technology we have been developing since the 60s? What was once intended purely for government use can now make us safer and provide more convenience in our lives. Futuristic, maybe, but so was the iPod and the iPhone, so was the electric car, and so was a website connecting more than 1 billion people.

These technologies were also met with initial adoption resistance. Facial recognition has the potential to streamline parts of our lives, make them more secure, and provide a greater level of convenience, ranging from withdrawing money at an ATM to entering your personalized home.

So will facial recognition become part of everyday life? I think this answer is far more complex than I could do justice here. People openly talk about how easy it is to open their iPhones with their thumbprints, willingly giving a public company their biometric information. Clear (Expedited Airport Security) reached their 1-million-member mark for improving your airport experience with biometrics. This technology is actively being tested all around the world and it will only keep improving.

So how strong is your privacy?
If you are worried about your privacy, you would need to throw away your credit cards, dump your phone in a lake and never go out in public. Phones now utilize sensors and accelerometers to track our every behavior, understanding exactly when we wake up in the morning, where our offices are, where we shop for groceries, what our interests are and how we spend our time. This, to me, is the ultimate invasion of privacy: we willingly give up our personal information to these “free” services, which then turn around and sell it for profit, all for a split-second hit of dopamine when someone “likes” a picture we post on Facebook.

Facial recognition is a tool in a larger toolbox of solutions. As with any powerful technology, if it ends up in the wrong hands, it could be problematic. For now, I believe, it is here to stay as it improves the flow of people’s lives and has the potential to silently protect individuals. When we do not understand a shift of behavior and the positive impact it can have, we as a society, always want to resist. Education is the crux to this resistance and once society recognizes the overwhelming benefits offered as a result of facial recognition we will be able to move past the mental hurdles.

FACIAL RECOGNITION APPLICATIONS
You’re used to unlocking your door with a key, but maybe not with your face. As strange as it sounds, our physical appearance can now verify payments, grant access and improve existing security systems. Protecting physical and digital possessions is a universal concern which benefits everyone, unless you’re a cybercriminal or a kleptomaniac, of course. Facial biometrics are gradually being applied to more industries, disrupting design, manufacturing, construction, law enforcement and healthcare. How is facial recognition software affecting these different sectors, and who are the companies and organizations behind its development?

Security
Companies are training deep learning algorithms to detect fraud, reduce the need for traditional passwords, and improve the ability to distinguish between a human face and a photograph.

Healthcare
Machine learning is being combined with computer vision to more accurately track patient medication consumption and support pain management procedures.

Marketing
Fraught with ethical considerations, marketing is a burgeoning domain of facial recognition innovation, and it’s one we can expect to see more of as facial recognition becomes ubiquitous.

INSTANT MESSAGING
Instant messaging is an internet service that allows people to communicate with each other in real time through instant messaging software. Unlike e-mail, instant messaging allows a message to appear on the other person’s screen as soon as the send button is pressed.

In the early 1990’s, instant messaging was often used only by users who wanted to talk or chat with their friends through the internet. But as the internet technology became more sophisticated, instant messaging became an integral tool for businesses.

Fig 1.1 INSTANT MESSAGING SYSTEMS
OBJECTIVE OF THE PROJECT
The main objective of the project is to help shy individuals talk to friends and strangers via instant messaging anonymously, without the need to add or know the person’s credentials. We achieve this by introducing an easy-to-use, futuristic facial recognition system in place of the legacy login/registration process.

1.5 ORGANISATION OF PROJECT REPORT
The organization of the project report is as follows. Chapter 1 gives a detailed introduction to facial recognition. In Chapter 2 the literature survey of the project is presented. The problem definition and methodologies of the project are explained in Chapter 3. The remaining chapters deal with design and implementation. The design of the proposed system and the software tools of the project are explained in Chapter 4. Chapter 5 explains the implementation of the proposed system. In Chapter 6 the conclusion and future scope of the project are given. The source code and snapshots of the output are shown to demonstrate the performance of the proposed system.

LITERATURE SURVEY
AUTHOR: Mark Zuckerberg
YEAR: 2016
Mark Zuckerberg’s goal was to learn about the state of artificial intelligence — where we’re further along than people realize and where we’re still a long way off. These challenges always lead him to learn more than he expected, and this one also gave him a better sense of all the internal technology Facebook engineers get to use, as well as a thorough overview of home automation.
So far this year, he has built a simple AI that he can talk to on his phone and computer, that can control his home, including lights, temperature, appliances, music and security, that learns his tastes and patterns, that can learn new words and concepts, and that can even entertain Max. It uses several artificial intelligence techniques, including natural language processing, speech recognition, face recognition, and reinforcement learning, written in Python, PHP and Objective-C.
Vision and Face Recognition
About one-third of the human brain is dedicated to vision, and there are many important AI problems related to understanding what is happening in images and videos. These problems include tracking (e.g., is Max awake and moving around in her crib?), object recognition (e.g., is that Beast or a rug in that room?), and face recognition (e.g., who is at the door?).

Face recognition is a particularly difficult version of object recognition because most people look relatively similar compared to telling apart two random objects — for example, a sandwich and a house. But Facebook has gotten very good at face recognition for identifying when your friends are in your photos. That expertise is also useful when your friends are at your door and your AI needs to determine whether to let them in.

To do this, he installed a few cameras at his door that can capture images from all angles. AI systems today cannot identify people from the back of their heads, so having a few angles ensures we see the person’s face. He built a simple server that continuously watches the cameras and runs a two-step process: first, it runs face detection to see if any person has come into view, and second, if it finds a face, it runs face recognition to identify who the person is. Once it identifies the person, it checks a list to confirm he is expecting that person, and if so, it will let them in and tell him they’re here.

This type of visual AI system is useful for a number of things, including knowing when Max is awake so it can start playing music or a Mandarin lesson, or solving the context problem of knowing which room in the house we’re in so the AI can correctly respond to context-free requests like “turn the lights on” without providing a location. Like most aspects of this AI, vision is most useful when it informs a broader model of the world, connected with other abilities like knowing who your friends are and how to open the door when they’re here. The more context the system has, the smarter it gets overall.

AUTHOR: Sajad Farokhi
YEAR: 2016
As a primary modality in biometrics, human face recognition has been employed widely in the computer vision domain because of its performance in a wide range of applications such as surveillance systems and forensics. Recently, near infrared (NIR) imagery has been used in many face recognition systems because of the high robustness to illumination changes in the acquired images. Even though some surveys have been conducted in this infrared domain, they have focused on thermal infrared methods rather than NIR methods. Furthermore, none of the previous infrared surveys provided comprehensive and critical analyses of NIR methods.
AUTHOR: Shonal Chaudhry
YEAR: 2017
She presents a visual assistive system that features mobile face detection and recognition in an unconstrained environment from a mobile source using convolutional neural networks. The goal of the system is to effectively detect individuals who approach facing towards the person equipped with the system. She found that face detection and recognition become very difficult tasks due to the movement of the user, which causes camera shakes resulting in motion blur and noise in the input to the visual assistive system. Due to the shortage of related datasets, she created a dataset of videos captured from a mobile source that features motion blur and noise from camera shakes. This makes the application a very challenging case of face detection and recognition in unconstrained environments. The performance of the convolutional neural network is further compared with a cascade classifier. The results show promising performance in daylight and artificial lighting conditions, while challenges remain for moonlight conditions, where false positives must be reduced in order to develop a robust system. She also provides a framework for implementing the system with smartphones and wearable devices for video input and auditory notification from the system to guide the visually impaired.

AUTHOR: Wen-Chun Chen (https://www.sciencedirect.com/science/article/pii/S0925231215007250)
YEAR: 2015
He used a face recognition algorithm to model differences in perception between autistic and non-autistic children. With this model it is possible to reproduce several phenomena of autism by assuming that autistic children lack the ability to abstract from horizontal invariants. In particular, he can explain why autistic children are better at recognizing faces from parts of the face while their overall recognition of faces is worse than in non-autistic children. He considers whether ASD may be the result of a version of a sophisticated perceptual system that makes less explicit use of invariants in the real-world environment than the typically developing brain. Some of these invariants may be hard-coded into the system rather than learned. The key point of the system is not the face recognition itself but the model, which can mimic the autistic brain. In the discussion he extends the model by suggesting a generally reduced ability to abstract from many different types of invariants and relates these, as explanations, to typical behavioral issues. In this way he hopes to give a complementary insight into autism and ASD.

AUTHOR: Xiaodong Zhou
YEAR: 2015
He designed and implemented an interactive, open-architecture computer vision software package called Ch OpenCV. Benefiting from both Ch and OpenCV, Ch OpenCV has many salient features: it is interactive, capable of interfacing with static or dynamic C/C++ binary libraries, integrated with advanced numerical features, and embeddable. It is especially suitable for rapid prototyping, web-based applications, and teaching and learning about computer vision.

AUTHOR: Chu-Sing Yang
YEAR: 2017
A strong edge descriptor is an important topic in a wide range of applications. Local binary pattern (LBP) techniques have been applied to numerous fields and are invariant with respect to luminance and rotation. However, the performance of LBP for optical character recognition is not as good as expected. In this study, he proposes a robust edge descriptor called improved LBP (ILBP), which is designed for optical character recognition. ILBP overcomes the noise problems observed in the original LBP by searching over scale space, which is implemented using an integral image with a reduced number of features to achieve recognition speed. In experiments, he evaluated ILBP’s performance on the ICDAR03, chars74K, IIIT5K, and Bib digital databases. The results show that ILBP is more robust to blur and noise than LBP.

AUTHOR: Jörg Schmalzl
YEAR: 2013
Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on depth and material of the object and the surrounding material. To obtain the parameters, the shape of the hyperbola has to be fitted. In the last years several methods were developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods our detection program is also able to immediately mark potential objects in real-time. For this task we use a version of the Viola–Jones learning algorithm, which is part of the open source library “OpenCV”. This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas we apply a simple Hough Transform for hyperbolas. Because the Viola–Jones Algorithm reduces the input for the computational expensive Hough Transform dramatically the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images more data of different GPR systems as input for the learning algorithm is necessary.

AUTHOR: Vladimir Protsenko
YEAR: 2017
This work describes the performance analysis of two face detection systems based on the Apache Storm and IBM InfoSphere Streams frameworks. Profiling was performed on image sequences of four different sizes: 100 x 100, 640 x 640, 1920 x 1080, and 4096 x 3112. Face detection was performed by an OpenCV cascade classifier. The experiment was run on a five-node CentOS cluster. The system based on Apache Storm was able to operate in real time at 24 frames per second on the hardware configuration used. Apache Storm was more scalable and demonstrated an advantage in throughput over its counterpart. The experiment also revealed the framework configuration parameters that play a major role in the face detection task on image sequences.

AUTHOR: Yue-Wei Du
YEAR: 2018
Face recognition in harsh environments is an active research topic. As one of the most important challenges, face recognition across pose has received extensive attention. The LBP feature has been used widely in face recognition because of its robustness to slight illumination and pose variations. However, due to the way the pattern feature is calculated, its effectiveness is limited under large rotations. In this paper, a new LBP-like feature extraction is proposed which modifies the coding rule using Huffman coding. In addition, a divide-and-rule strategy is applied to both face representation and classification, which aims to improve recognition performance across pose. Extensive experiments on the CMU PIE, FERET and LFW databases are conducted to verify the efficacy of the proposed method. The experimental results show that the method significantly outperforms other approaches.

PROBLEM DEFINITION AND METHODOLOGIES
3.1 PROBLEM DEFINITION

Autism, social anxiety and social awkwardness are some of the pressing issues among youngsters of this generation, and few applications cater to these audiences. We have relied too much on traditional screens and form-based interfaces for authentication and communication online. Traditional messaging applications like Messenger and WhatsApp are curated to be addictive but not to solve the problem. Face recognition technologies have, so far, been used only for authentication and security purposes. With this project, we have tried something new: using this same technology for other use cases as well.
3.2 EXISTING SYSTEM
The existing systems are traditional, form-based textual instant messaging applications such as Sarahah and Ask.fm. Ask.fm is a global social networking site where users create profiles and can send each other questions; it is a form of anonymous social media that encourages questions to be submitted anonymously. Sarahah was originally meant for workers to compliment their bosses anonymously.

3.3 PROPOSED SYSTEM
Face recognition for login/signup
Modern, refreshing UI
Utilizes LBP for Face Recognition with OpenCV
Innovative animations for loaders and transitions
The ability to keep the name of the person anonymous/public.

Emojis support in the instant messaging
3.4 LOCAL BINARY PATTERN
Local Binary Patterns, or LBPs for short, are a texture descriptor made popular by the work of Ojala et al. in their 2002 paper, Multiresolution Grayscale and Rotation Invariant Texture Classification with Local Binary Patterns (although the concept of LBPs was introduced as early as 1993).

Unlike Haralick texture features that compute a global representation of texture based on the Gray Level Co-occurrence Matrix, LBPs instead compute a local representation of texture. This local representation is constructed by comparing each pixel with its surrounding neighborhood of pixels.

3.4.1 LBP ALGORITHM- STEPS & IMPLEMENTATION
The steps to be followed are as follows:
1. Convert the image to grayscale.

2. For each pixel in the grayscale image, we select a neighborhood of size r surrounding the center pixel.

3. An LBP value is then calculated for this center pixel and stored in the output 2D array with the same width and height as the input image.

4. From there, we need to calculate the LBP value for the center pixel. We can start from any neighboring pixel and work our way clockwise or counter-clockwise, but our ordering must be kept consistent for all pixels in our image and all images in our dataset.

5. Given a 3 x 3 neighborhood, we thus have 8 neighbors on which we must perform a binary test. The results of this binary test are stored in an 8-bit array, which we then convert to decimal.

6. This value is stored in the output LBP 2D array, which we can then visualize.

7. This process of thresholding, accumulating binary strings, and storing the output decimal value in the LBP array is then repeated for each pixel in the input image.

8. The last step is to compute a histogram over the output LBP array. Since a 3 x 3 neighborhood has 2^8 = 256 possible patterns, our LBP 2D array thus has a minimum value of 0 and a maximum value of 255, allowing us to construct a 256-bin histogram of LBP codes as our final feature vector.
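To make the steps above concrete, the following is a minimal NumPy sketch (illustration only, not part of the project code) that computes the basic 3 x 3 LBP code for every interior pixel of a grayscale image and builds the 256-bin histogram described in step 8:

# illustration of the basic 3 x 3 LBP operator described above
import numpy as np

def basic_lbp_histogram(gray):
    # gray: 2D uint8 array (a grayscale image)
    h, w = gray.shape
    lbp = np.zeros((h, w), dtype=np.uint8)
    # fixed clockwise ordering of the 8 neighbors, starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = gray[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                # binary test: a neighbor >= the center contributes a 1 at this bit
                if gray[y + dy, x + dx] >= center:
                    code |= 1 << bit
            lbp[y, x] = code
    # 256-bin histogram of the LBP codes, normalized, as the feature vector
    hist, _ = np.histogram(lbp[1:-1, 1:-1], bins=256, range=(0, 256))
    return hist / (hist.sum() + 1e-7)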
3.4.2 IMPLEMENTATION OF LBP WITH OPENCV & PYTHON
The structure of our project is as follows:
pyimagesearch/
    localbinarypatterns.py
recognize.py
The code is as follows:
Descriptor class
# import the necessary packages
from skimage import feature
import numpy as np

class LocalBinaryPatterns:
    def __init__(self, numPoints, radius):
        # store the number of points and radius
        self.numPoints = numPoints
        self.radius = radius

    def describe(self, image, eps=1e-7):
        # compute the Local Binary Pattern representation
        # of the image, and then use the LBP representation
        # to build the histogram of patterns
        lbp = feature.local_binary_pattern(image, self.numPoints,
            self.radius, method="uniform")
        (hist, _) = np.histogram(lbp.ravel(),
            bins=np.arange(0, self.numPoints + 3),
            range=(0, self.numPoints + 2))

        # normalize the histogram
        hist = hist.astype("float")
        hist /= (hist.sum() + eps)

        # return the histogram of Local Binary Patterns
        return hist
Create a new file named recognize.py
# import the necessary packages
from pyimagesearch.localbinarypatterns import LocalBinaryPatterns
from sklearn.svm import LinearSVC
from imutils import paths
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--training", required=True,
    help="path to the training images")
ap.add_argument("-e", "--testing", required=True,
    help="path to the testing images")
args = vars(ap.parse_args())

# initialize the local binary patterns descriptor along with
# the data and label lists
desc = LocalBinaryPatterns(24, 8)
data = []
labels = []
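The listing above is truncated at this point. A hedged sketch of how it could continue, looping over the training images with the descriptor class, fitting the LinearSVC imported above, and classifying the testing images, is shown below (the loop structure and the convention that the label is the parent folder name are assumptions):

import os  # needed for os.path.sep below, in addition to the imports above

# loop over the training images
for imagePath in paths.list_images(args["training"]):
    # load the image, convert it to grayscale, and describe it with LBP
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hist = desc.describe(gray)
    # the label is assumed to be the name of the folder containing the image
    labels.append(imagePath.split(os.path.sep)[-2])
    data.append(hist)

# train a Linear SVM on the LBP histograms
model = LinearSVC(C=100.0, random_state=42)
model.fit(data, labels)

# loop over the testing images and predict each one
for imagePath in paths.list_images(args["testing"]):
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    hist = desc.describe(gray)
    prediction = model.predict(hist.reshape(1, -1))
    print("{}: {}".format(imagePath, prediction[0]))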
DESIGN PROCESS
4.1 SYSTEM REQUIREMENTS
This mobile application runs on both iOS and Android devices. The efficiency of recognition depends upon the clarity of the picture obtained from the device’s camera.

4.1.1 Hardware Requirements
For iOS Devices:
Supported on iPhones 5S and later, iPads 5th generation and later
For Android Devices:
Supported on all Android devices running Android Marshmallow (6.0) or later
4.1.2 Software Requirements
S.NO | SOFTWARE       | SPECIFICATION
1    | Languages used | Java, Swift, Python, XML
2    | Tools          | Xcode, Android Studio, Terminal, Heroku, FTP
3    | Backend        | NodeJS, Ember, Python, CocoaPods, Firebase DB
TABLE 4.1 Software Requirements
4.2 FLOW DIAGRAM
Flow diagram is a collective term for a diagram representing a flow or set of dynamic relationships in a system.

Fig 4.1 FLOW DIAGRAM
MODULE DESCRIPTION
The lists of modules are as follows:
Heroku Host
Firebase DB
App or Front – End

FIG 4.2 MODULE DESCRIPTION
Heroku Host
Heroku Architecture
Defining an application
Heroku lets you deploy, run and manage applications written in Ruby, Node.js, Java, Python, Clojure, Scala, Go and PHP.

An application is a collection of source code written in one of these languages, perhaps a framework, and some dependency description that instructs a build system as to which additional dependencies are needed in order to build and run the application.

Dependency mechanisms vary across languages: in Ruby you use a Gemfile, in Python a requirements.txt, in Node.js a package.json, in Java a pom.xml and so on.

The source code for your application, together with the dependency file, should provide enough information for the Heroku platform to build your application, to produce something that can be executed.

Knowing what to execute
You don’t need to make many changes to an application in order to run it on Heroku. One requirement is informing the platform as to which parts of your application are runnable.

If you’re using some established framework, Heroku can figure it out. For example, in Ruby on Rails, it’s typically rails server, in Django it’s python <app>/manage.py runserver and in Node.js it’s the main field in package.json.

For other applications, you may need to explicitly declare what can be executed. You do this in a text file that accompanies your source code – a Procfile. Each line declares a process type – a named command that can be executed against your built application. For example, your Procfile may look like this:
web: java -jar lib/foobar.jar $PORT
queue: java -jar lib/queue-processor.jar
This file declares a web process type and provides the command that needs to be executed in order to run it (in this case, java -jar lib/foobar.jar $PORT). It also declares a queue process type, and its corresponding command.

The earlier definition of an application can now be refined to include this single additional Procfile.

Heroku is a polyglot platform – it lets you build, run and scale applications in a similar manner across all the languages – utilizing the dependencies and Procfile. The Procfile exposes an architectural aspect of your application (in the above example there are two entry points to the application) and this architecture lets you, for example, scale each part independently. An excellent guide to architecture principles that work well for applications running on Heroku can be found in Architecting Applications for Heroku.
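For this project’s Python web service (started locally with python runApp.py in Chapter 5), the corresponding Procfile would contain a single web process type; the exact command is an assumption based on that chapter:

web: python runApp.py

Heroku’s Python buildpack installs the packages listed in requirements.txt before running this command; on Heroku the script would also need to bind to the port given in the PORT config var rather than a fixed local port.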

Deploying applications
Git is a powerful, distributed version control system that many developers use to manage and version source code. The Heroku platform uses Git as the primary means for deploying applications (there are other ways to transport your source code to Heroku, including via an API).

When you create an application on Heroku, it associates a new Git remote, typically named heroku, with the local Git repository for your application.

As a result, deploying code is just the familiar git push, but to the heroku remote instead:
$ git push heroku master
Terminology: Deploying applications involves sending the application to Heroku using either Git, GitHub, Dropbox, or via an API.

There are many other ways of deploying applications too. For example, you can enable GitHub integration so that each new pull request is associated with its own new application, which enables all sorts of continuous integration scenarios. Or you can use Dropbox Sync, which lets you deploy the contents of Dropbox folders to Heroku. Finally, you can also use the Heroku API to build and release apps.

Deployment then, is about moving your application from your local system to Heroku – and Heroku provides several ways in which apps can be deployed.

Building applications
When the Heroku platform receives the application source, it initiates a build of the source application. The build mechanism is typically language specific, but follows the same pattern, typically retrieving the specified dependencies, and creating any necessary assets (whether as simple as processing style sheets or as complex as compiling code).

For example, when the build system receives a Rails application, it may fetch all the dependencies specified in the Gemfile, as well as generate files based on the asset pipeline. A Java application may fetch binary library dependencies using Maven, compile the source code together with those libraries, and produce a JAR file to execute.

The source code for your application, together with the fetched dependencies and output of the build phase such as generated assets or compiled code, as well as the language and framework, are assembled into a slug.

These slugs are a fundamental aspect of what happens during application execution – they contain your compiled, assembled application – ready to run – together with the instructions (the Procfile) of what you may want to execute.

Running applications on dynos
Heroku executes applications by running a command you specified in the Procfile, on a dyno that’s been preloaded with your prepared slug (in fact, with your release, which extends your slug and a few items not yet defined: config vars and add-ons).

Think of a running dyno as a lightweight, secure, virtualized Unix container that contains your application slug in its file system.

Terminology: Dynos are isolated, virtualized Unix containers, that provide the environment required to run an application.

Generally, if you deploy an application for the first time, Heroku will run 1 web dyno automatically. In other words, it will boot a dyno, load it with your slug, and execute the command you’ve associated with the web process type in your Procfile.

You have control over how many dynos are running at any given time. Given the Procfile example earlier, you can start 5 dynos, 3 for the web and 2 for the queue process types, as follows:
$ heroku ps:scale web=3 queue=2
When you deploy a new version of an application, all of the currently executing dynos are killed, and new ones (with the new release) are started to replace them – preserving the existing dyno formation.

To understand what’s executing, you just need to know what dynos are running which process types:
$ heroku ps
== web: `java -jar lib/foobar.jar $PORT`
web.1: up 2013/02/07 18:59:17 (~ 13m ago)
web.2: up 2013/02/07 18:52:08 (~ 20m ago)
web.3: up 2013/02/07 18:31:14 (~ 41m ago)
== queue: `java -jar lib/queue-processor.jar`
queue.1: up 2013/02/07 18:40:48 (~ 32m ago)
queue.2: up 2013/02/07 18:40:48 (~ 32m ago)
Dynos then, are an important means of scaling your application. In this example, the application is well architected to allow for the independent scaling of web and queue worker dynos.

Config vars
An application’s configuration is everything that is likely to vary between environments (staging, production, developer environments, etc.). This includes backing services such as databases, credentials, or environment variables that provide some specific information to your application.

Heroku lets you run your application with a customizable configuration – the configuration sits outside of your application code and can be changed independently of it.

The configuration for an application is stored in config vars. For example, here’s how to configure an encryption key for an application:
$ heroku config:set ENCRYPTION_KEY=my_secret_launch_codes
Adding config vars and restarting demoapp… done, v14
ENCRYPTION_KEY: my_secret_launch_codes
At runtime, all of the config vars are exposed as environment variables – so they can be easily extracted programmatically. A Ruby application deployed with the above config var can access it by calling ENV["ENCRYPTION_KEY"].

All dynos in an application will have access to the exact same set of config vars at runtime.
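In a Python dyno, the same config vars are read from the environment; for example (using the ENCRYPTION_KEY name from the example above):

import os

# read the config var set with `heroku config:set ENCRYPTION_KEY=...`
encryption_key = os.environ.get("ENCRYPTION_KEY")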

Releases
Earlier, this article stated that to run your application on a dyno, the Heroku platform loaded the dyno with your most recent slug. This needs to be refined: in fact it loads it with the slug and any config variables you have assigned to the application. The combination of slug and configuration is called a release.

All releases are automatically persisted in an append-only ledger, making managing your application, and different releases, a cinch. Use the heroku releases command to see the audit trail of release deploys:
$ heroku releases
== demoapp Releases
v103 Deploy 582fc95 [email protected] 2013/01/31 12:15:35
v102 Deploy 990d916 [email protected] 2013/01/31 12:01:12
The number next to the deploy message, for example 582fc95, corresponds to the commit hash of the repository you deployed to Heroku.

Every time you deploy a new version of an application, a new slug is created and release is generated.

As Heroku contains a store of the previous releases of your application, it’s very easy to rollback and deploy a previous release:
$ heroku releases:rollback v102
Rolling back demoapp… done, v102
$ heroku releases
== demoapp Releases
v104 Rollback to v102 [email protected] 2013/01/31 14:11:33 (~15s ago)
v103 Deploy 582fc95 [email protected] 2013/01/31 12:15:35
v102 Deploy 990d916 [email protected] 2013/01/31 12:01:12
Making a material change to your application, whether it’s changing the source or configuration, results in a new release being created.

A release then, is the mechanism behind how Heroku lets you modify the configuration of your application (the config vars) independently of the application source (stored in the slug) – the release binds them together. Whenever you change a set of config vars associated with your application, a new release will be generated.

Dyno manager
Part of the Heroku platform, the dyno manager is responsible for keeping dynos running. For example, dynos are cycled at least once per day, or whenever the dyno manager detects a fault in the running application (such as out of memory exceptions) or problems with the underlying hardware that require the dyno to be moved to a new physical location.

Terminology: The dyno manager of the Heroku platform is responsible for managing dynos across all applications running on Heroku.

This dyno cycling happens transparently and automatically on a regular basis, and is logged.

Terminology: Applications that use the free dyno type will sleep. When a sleeping application receives HTTP traffic, it will be awakened – causing a delay of a few seconds. Using one of the other dyno types will avoid sleeping.

Because Heroku manages and runs applications, there’s no need to manage operating systems or other internal system configuration. One-off dynos can be run with their input/output attached to your local terminal. These can also be used to carry out admin tasks that modify the state of shared resources, for example database configuration – perhaps periodically through a scheduler.

Here’s the simplest way to create and attach to a one-off dyno:
$ heroku run bash
Running `bash` attached to terminal… up, run.8963
~ $ ls
This will spin up a new dyno, loaded with your release, and then run the bash command – which will provide you with a Unix shell (remember that dynos are effectively isolated virtualized Unix containers). Once you’ve terminated your session, or after a period of inactivity, the dyno will be removed.

Changes to the filesystem on one dyno are not propagated to other dynos and are not persisted across deploys and dyno restarts. A better and more scalable approach is to use a shared resource such as a database or queue. The ephemeral nature of the file system in a dyno can be demonstrated with the above command: if you create a one-off dyno by running heroku run bash, which opens a Unix shell on the dyno, and then create a file on that dyno and terminate your session, the change is lost. All dynos, even those in the same application, are isolated – and after the session is terminated the dyno will be killed. New dynos are always created from a slug, not from the state of other dynos.

Add-ons
Applications typically make use of add-ons to provide backing services such as databases, queueing & caching systems, storage, email services and more. Add-ons are provided as services by Heroku and third parties – there’s a large marketplace of add-ons you can choose from.

Heroku treats these add-ons as attached resources: provisioning an add-on is a matter of choosing one from the add-on marketplace, and attaching it to your application.

For example, here is how to add the Heroku Redis backing store add-on to an application:
$ heroku addons:create heroku-redis:hobby-dev
Dynos do not share file state, and so add-ons that provide some kind of storage are typically used as a means of communication between dynos in an application. For example, Redis or Postgres could be used as the backing mechanism in a queue; then dynos of the web process type can push job requests onto the queue, and dynos of the queue process type can pull jobs requests from the queue.

The add-on service provider is responsible for the service – and the interface to your application is often provided through a config var. In this example, a REDIS_URL will be automatically added to your application when you provision the add-on. You can write code that connects to the service through the URL, for example:
uri = URI.parse(ENV["REDIS_URL"])
REDIS = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
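For reference, a Python dyno could connect to the same add-on with the redis-py client in a similar way (this is only an illustration; Facepass itself uses Firebase rather than Redis for storage):

import os
import redis

# connect using the REDIS_URL config var provided by the add-on
r = redis.from_url(os.environ["REDIS_URL"])
r.lpush("jobs", "recognize-face")  # a web dyno pushes a job onto the queue
job = r.rpop("jobs")               # a queue dyno pulls the job off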
Add-ons are associated with an application, much like config vars – and so the earlier definition of a release needs to be refined. A release of your applications is not just your slug and config vars; it’s your slug, config vars as well as the set of provisioned add-ons.

Logging and monitoring
Heroku treats logs as streams of time-stamped events, and collates the stream of logs produced from all of the processes running in all dynos, and the Heroku platform components, into Logplex – a high-performance, real-time system for log delivery.

It’s easy to examine the logs across all the platform components and dynos:
$ heroku logs
2013-02-11T15:19:10+00:00 heroku[router]: at=info method=GET path=/articles/custom-domains host=mydemoapp.heroku.com fwd=74.58.173.188 dyno=web.1 queue=0 wait=0ms connect=0ms service=1452ms status=200 bytes=5783
2013-02-11T15:19:10+00:00 app[web.2]: Started GET "/" for 1.169.38.175 at 2013-02-11 15:19:10 +0000
2013-02-11T15:19:10+00:00 app[web.1]: Started GET "/" for 2.161.132.15 at 2013-02-11 15:20:10 +0000
Here you see three timestamped log entries: the first from Heroku’s router, the last two from two dynos running the web process type.

You can also dive into the logs from just a single dyno, and keep the channel open, listening for further events:
$ heroku logs --ps web.1 --tail
2013-02-11T15:19:10+00:00 app[web.1]: Started GET "/" for 1.169.38.175 at 2013-02-11 15:19:10 +0000
Logplex keeps a limited buffer of log entries solely for performance reasons. To persist them, and action events such as email notification on exception, use a Logging Add-on, which ties into log drains – an API for receiving the output from Logplex.

HTTP routing
Depending on your dyno formation, some of your dynos will be running the command associated with the web process type, and some will be running other commands associated with other process types.

The dynos that run process types named web are different in one way from all other dynos – they will receive HTTP traffic. Heroku’s HTTP routers distribute incoming requests for your application across your running web dynos.

So scaling an app’s capacity to handle web traffic involves scaling the number of web dynos:
$ heroku ps:scale web+5
A random selection algorithm is used for HTTP request load balancing across web dynos – and this routing handles both HTTP and HTTPS traffic. It also supports multiple simultaneous connections, as well as timeout handling.

Tying it all together
The concepts explained here can be divided into two buckets: those that involve the development and deployment of an application, and those that involve the runtime operation of the Heroku platform and the application after it’s deployed.

The following two sections recapitulate the main components of the platform, separating them into these two buckets.

Deploy
Applications consist of your source code, a description of any dependencies, and a Procfile.

Procfiles list process types – named commands that you may want executed.

Deploying applications involves sending the application to Heroku using either Git, GitHub, Dropbox, or via an API.

Buildpacks lie behind the slug compilation process. Buildpacks take your application, its dependencies, and the language runtime, and produce slugs.

A slug is a bundle of your source, fetched dependencies, the language runtime, and compiled/generated output of the build system – ready for execution.

Config vars contain customizable configuration data that can be changed independently of your source code. The configuration is exposed to a running application via environment variables.

Add-ons are third party, specialized, value-added cloud services that can be easily attached to an application, extending its functionality.

A release is a combination of a slug (your application), config vars and add-ons. Heroku maintains an append-only ledger of releases you make.

Runtime
Dynos are isolated, virtualized Unix containers that provide the environment required to run an application.

Your application’s dyno formation is the total number of currently-executing dynos, divided between the various process types you have scaled.

The dyno manager is responsible for managing dynos across all applications running on Heroku.

Applications that use the free dyno type will sleep after 30 minutes of inactivity. Scaling to multiple web dynos, or a different dyno type, will avoid this.

One-off Dynos are temporary dynos that run with their input/output attached to your local terminal. They’re loaded with your latest release.

Each dyno gets its own ephemeral filesystem – with a fresh copy of the most recent release. It can be used as temporary scratchpad, but changes to the filesystem are not reflected to other dynos.

Logplex automatically collates log entries from all the running dynos of your app, as well as other components such as the routers, providing a single source of activity.

Scaling an application involves varying the number of dynos of each process type.

User interface
The user’s information is stored in the Firebase db. The unique ID in the DB is used to uniquely identify a person in the DB. The UI was designed with XML for Android and Swift and Storyboards for iOS.
OpenCV
OpenCV (Open Source Computer Vision) is a popular computer vision library started by Intel in 1999. The cross-platform library sets its focus on real-time image processing and includes patent-free implementations of the latest computer vision algorithms. In 2008 Willow Garage took over support and OpenCV 2.3.1 now comes with a programming interface to C, C++, Python and Android. OpenCV is released under a BSD license so it is used in academic projects and commercial products alike.

OpenCV 2.4 now comes with the very new FaceRecognizer class for face recognition, so we can start experimenting with face recognition right away.
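As a rough sketch of how OpenCV’s LBPH recognizer can be used (the training images, labels and file name below are placeholders; producing a .yaml model file is consistent with the training API described in Chapter 5, though the exact project code may differ):

import cv2
import numpy as np

# the cv2.face module requires the opencv-contrib-python package
recognizer = cv2.face.LBPHFaceRecognizer_create()

# placeholder training data: a list of grayscale face images and integer labels
faces = [np.zeros((100, 100), dtype=np.uint8)]
ids = np.array([0])
recognizer.train(faces, ids)

# persist the trained model to a .yaml file and load it back later
recognizer.write("model.yaml")
recognizer.read("model.yaml")

# predict returns the best-matching label and a confidence (distance) value
label, confidence = recognizer.predict(np.zeros((100, 100), dtype=np.uint8))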
LBP With OpenCV
Eigenfaces and Fisherfaces take a somewhat holistic approach to face recognition. You treat your data as a vector somewhere in a high-dimensional image space. We all know high-dimensionality is bad, so a lower-dimensional subspace is identified, where (probably) useful information is preserved. The Eigenfaces approach maximizes the total scatter, which can lead to problems if the variance is generated by an external source, because components with a maximum variance over all classes aren’t necessarily useful for classification. So to preserve some discriminative information we applied a Linear Discriminant Analysis and optimized as described in the Fisherfaces method. The Fisherfaces method worked great… at least for the constrained scenario we’ve assumed in our model.

Now real life isn’t perfect. You simply can’t guarantee perfect light settings in your images or 10 different images of a person. So what if there’s only one image for each person? Our covariance estimates for the subspace may be horribly wrong, and so will the recognition. Remember the Eigenfaces method had a 96% recognition rate on the AT&T Facedatabase? How many images do we actually need to get such useful estimates? Here are the Rank-1 recognition rates of the Eigenfaces and Fisherfaces method on the AT&T Facedatabase, which is a fairly easy image database:

FIG 4.3 Image Database Graphs
So in order to get good recognition rates you’ll need at least 8(+-1) images for each person and the Fisherfaces method doesn’t really help here. The above experiment is a 10-fold cross validated result carried out with the facerec framework at: https://github.com/bytefish/facerec. This is not a publication, so I won’t back these figures with a deep mathematical analysis. Please have a look into KM01 for a detailed analysis of both methods, when it comes to small training datasets.

So some research concentrated on extracting local features from images. The idea is to not look at the whole image as a high-dimensional vector, but describe only local features of an object. The features you extract this way will have a low-dimensionality implicitly. A fine idea! But you’ll soon observe the image representation we are given doesn’t only suffer from illumination variations. Think of things like scale, translation or rotation in images – your local description has to be at least a bit robust against those things. Just like SIFT, the Local Binary Patterns methodology has its roots in 2D texture analysis. The basic idea of Local Binary Patterns is to summarize the local structure in an image by comparing each pixel with its neighborhood. Take a pixel as center and threshold its neighbors against. If the intensity of the center pixel is greater-equal its neighbor, then denote it with 1 and 0 if not. You’ll end up with a binary number for each pixel, just like 11001111. So with 8 surrounding pixels you’ll end up with 2^8 possible combinations, called Local Binary Patterns or sometimes referred to as LBP codes. The first LBP operator described in literature actually used a fixed 3 x 3 neighborhood just like this:

FIG 4.4 LBP PROCESS
IMPLEMENTATION
API Installation (local machine)
Build and run the project in Xcode to test the App.

To test the facial recognition API on a local machine, follow the steps given below:
1. Change directory to the webservice dir: cd FacePass Webservice
2. Install virtualenv: pip install virtualenv
3. Create a virtual environment: virtualenv facepassenv
4. Activate the virtualenv: source facepassenv/bin/activate
5. Install the required dependencies: pip install -r requirements.txt
(this will install the required dependencies for the project automatically)
6. Now run the Flask app using: python runApp.py
Now you will get something like this:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
Now open any browser you like and type localhost:5000 in the URL bar; you will be welcomed with the project name.

At this stage the project is working fine and you are ready to test the facial recognition API.
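For orientation, a minimal runApp.py consistent with the output above could look like the following (the route and welcome message are placeholders; the real service also registers the face recognition endpoints described next):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # the welcome message shown when localhost:5000 is opened in a browser
    return "FacePass Webservice"

if __name__ == "__main__":
    # matches the "Running on http://0.0.0.0:5000/" output shown above
    app.run(host="0.0.0.0", port=5000, debug=True)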
API Calls
We recommend using the Postman app for making the API calls.
Obtaining a UID
URL: localhost:5000/getUID/
METHOD: GET
This API will return a unique ID (generated based on the models already present); this ID should be included in the other API calls too. Save this UID somewhere temporarily, or if you are using a database, first save this UID along with the user’s name and data. It is recommended to use this UID instead of your own generated UID to prevent overwriting other users’ models.

Storing Face to the Server (in our case in local machine)
URL:localhost:5000/storeFace/img/id/<UID>/
METHOD: POST
PARAMS:
UID : your UID obtained from getUID API
binary data : image in .jpg format
This API takes the UID (which we obtained before) as a parameter and an image (.jpg format) through the POST method. We recommend using the Postman app to easily upload images. This API simply stores your face on the server under the specified UID. The photos are saved in FacePass Webservice/model/
Detecting Faces from the uploaded photos
URL:localhost:5000/detectFaces/id/<UID>/
METHOD: POST
PARAMS:
UID: your UID obtained from getUID API
binary data : image in .jpg format
This API takes the UID (which we obtained before) as a parameter and an image (.jpg format) through the POST method. We recommend using the Postman app to easily upload images. This API detects and extracts your face from the uploaded/saved photos of the specified UID and saves it in the dataset folder (FacePass Webservice/dataset/).
Training faces
URL:localhost:5000/train/
METHOD: GET
This API will train on the faces stored in the dataset folder and create a .yaml file that stores the data for face recognition.

Recognizing faces
URL:localhost:5000/recognize/
METHOD: POST
PARAMS:
binary data : image in .jpg format
This API takes an image (.jpg format) through the POST method. We recommend using the Postman app to easily upload images. This API detects and recognizes the face in the photo given to the API.
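While Postman is recommended above, the same flow can also be scripted. The sketch below walks through all five calls with the Python requests library (the endpoint paths are those listed above; the file names, the raw-body upload encoding and the plain-text responses are assumptions):

import requests

BASE = "http://localhost:5000"

# 1. obtain a unique ID for the new user
uid = requests.get(BASE + "/getUID/").text.strip()

# 2. upload a face photo under that UID
with open("face.jpg", "rb") as f:
    requests.post(BASE + "/storeFace/img/id/{}/".format(uid), data=f.read())

# 3. detect and extract the face from the uploaded photo
with open("face.jpg", "rb") as f:
    requests.post(BASE + "/detectFaces/id/{}/".format(uid), data=f.read())

# 4. train the recognizer on everything in the dataset folder
requests.get(BASE + "/train/")

# 5. recognize the person in a new photo
with open("query.jpg", "rb") as f:
    result = requests.post(BASE + "/recognize/", data=f.read())
print(result.text)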

5.1 iOS integration with Firebase
Add Firebase to your iOS Project
Prerequisites
Before you begin, you need a few things set up in your environment:
Xcode 8.0 or later
An Xcode project targeting iOS 8 or above
Swift projects must use Swift 3.0 or later
The bundle identifier of your app
CocoaPods 1.2.0 or later
For Cloud Messaging:
A physical iOS device
An Apple Push Notification Authentication Key for your Apple Developer account
In Xcode, enable Push Notifications in App > Capabilities
If you don’t have an Xcode project already, you can download one of our quickstart samples if you just want to try a Firebase feature. If you’re using a quickstart, remember to get the bundle identifier from the project settings, you’ll need it for the next step.

Support for iOS 7 deprecated: As of v4.5.0 of the Firebase SDK for iOS, support for iOS 7 is deprecated. Upgrade your apps to target iOS 8 or above. To see the breakdown of worldwide iOS versions, go to Apple’s App Store support page.

Add Firebase to your app
It’s time to add Firebase to your app. To do this you’ll need a Firebase project and a Firebase configuration file for your app.

Create a Firebase project in the Firebase console, if you don’t already have one. If you already have an existing Google project associated with your mobile app, click Import Google Project. Otherwise, click Add project.

Click Add Firebase to your iOS app and follow the setup steps. If you’re importing an existing Google project, this may happen automatically and you can just download the config file.

When prompted, enter your app’s bundle ID. It’s important to enter the bundle ID your app is using; this can only be set when you add an app to your Firebase project.

At the end, you’ll download a GoogleService-Info.plist file. You can download this file again at any time.

If you haven’t done so already, add this file to your Xcode project root using the Add Files utility in Xcode (From the File menu, click Add Files). Make sure the file is included in your app’s build target.

Note: If you have multiple build variants with different bundle IDs defined, each app must be added to your project in Firebase console.

Add the SDK
If you are setting up a new project, you need to install the SDK. You may have already completed this as part of creating a Firebase project.

We recommend using CocoaPods to install the libraries. You can install Cocoapods by following the installation instructions. If you’d rather not use CocoaPods, you can integrate the SDK frameworks directly without using CocoaPods.

If you are planning to download and run one of the quickstart samples, the Xcode project and Podfile are already present, but you’ll still need to install the pods and download the GoogleService-Info.plist file. If you would like to integrate the Firebase libraries into one of your own projects, you will need to add the pods for the libraries that you want to use.

If you don’t have an Xcode project yet, create one now.

Create a Podfile if you don’t have one:
$ cd your-project-directory
$ pod init
Add the pods that you want to install. You can include a Pod in your Podfile like this:
pod ‘Firebase/Core’
This will add the prerequisite libraries needed to get Firebase up and running in your iOS app, along with Google Analytics for Firebase. A list of currently available pods and subspecs is provided below. These are linked in feature specific setup guides as well.

Install the pods and open the .xcworkspace file to see the project in Xcode.

$ pod install
$ open your-project.xcworkspace
Download a GoogleService-Info.plist file from Firebase console and include it in your app.

Note: If you have multiple bundle IDs in your project, each bundle ID must be connected in Firebase console so it can have its own GoogleService-Info.plist file.
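Where a build variant needs to load its own configuration explicitly, the Firebase SDK can also be configured from a specific plist file. The sketch below is only an illustration of that option; the file name "GoogleService-Info-Staging" is an assumption, and by default FirebaseApp.configure() reads GoogleService-Info.plist automatically.

import Foundation
import Firebase

// Minimal sketch (hypothetical file name): configure Firebase from a
// variant-specific plist instead of the default GoogleService-Info.plist.
func configureFirebaseForStagingVariant() {
    if let path = Bundle.main.path(forResource: "GoogleService-Info-Staging", ofType: "plist"),
       let options = FirebaseOptions(contentsOfFile: path) {
        FirebaseApp.configure(options: options)
    }
}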

Initialize Firebase in your app
The final step is to add initialization code to your application. You may have already done this as part of adding Firebase to your app. If you are using a quickstart this has been done for you.

Import the Firebase module in your UIApplicationDelegate
import Firebase
Configure a FirebaseApp shared instance, typically in your application’s application:didFinishLaunchingWithOptions: method:
// Use the Firebase library to configure APIs
FirebaseApp.configure()
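Putting the pieces together, a minimal UIApplicationDelegate for this setup might look as follows. This is a sketch assuming a standard UIKit app; everything apart from the FirebaseApp.configure() call is ordinary boilerplate, not project-specific code.

import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        // Configure the default FirebaseApp before any Firebase service (Database, Auth, ...) is used.
        FirebaseApp.configure()
        return true
    }
}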
5.2 XCODE
Xcode is an IDE developed by Apple for macOS, iOS, watchOS and tvOS. It includes most of Apple’s developer documentation and Interface Builder, an application used to create graphical user interfaces.

5.3 ANDROID STUDIO
Android Studio is the official Integrated Development Environment (IDE) for Android app development, based on IntelliJ IDEA. On top of IntelliJ’s powerful code editor and developer tools, Android Studio offers additional features that enhance productivity when building Android apps, such as:
A flexible Gradle-based build system
A fast and feature-rich emulator
A unified environment where you can develop for all Android devices
Instant Run to push changes to your running app without building a new APK
Code templates and GitHub integration to help you build common app features and import sample code
Extensive testing tools and frameworks
Lint tools to catch performance, usability, version compatibility, and other problems
C++ and NDK support
Built-in support for Google Cloud Platform, making it easy to integrate Google Cloud Messaging and App Engine
5.4 OVERALL WORKING OF THE DEVICE
– If you are a first-time user, you need to register your face using a few simple steps (Fig).
– Simply show your face tilted left, tilted right and looking straight ahead, then give yourself a name. You can choose to display your name on your profile (which is shown when somebody scans your face) or let others ask you for your name.

– After registration, simply position the face of the person you want to talk to inside the camera’s face frame (a sketch of the recognition request follows this list).

– Confirm the identity of the person that Facepass recognized and tap Confirm.
– You can now leave messages with emojis on their profile.
– Your friend (or the stranger) then scans his or her own face to read the comments.
– That’s it!
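For the recognition step, the captured frame is sent to the Python backend whose endpoints are listed under FPValues.urls in the appendix. The sketch below shows one way such a request could be made; the HTTP method, headers and response handling are illustrative assumptions, not the app’s exact API contract.

import UIKit

// Illustrative sketch: upload a captured face image to the /recognize/ endpoint
// (FPValues.urls.recognize is defined in the appendix; request details are assumed).
func recognizeFace(_ image: UIImage, completion: @escaping (Data?) -> Void) {
    guard let url = URL(string: FPValues.urls.recognize),
          let jpeg = UIImageJPEGRepresentation(image, 0.8) else {
        completion(nil)
        return
    }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")
    request.httpBody = jpeg
    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data) // the caller would parse the recognized user's UID from this response
    }.resume()
}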

5.5 USER REGISTRATION
Simply show your face tilted left, tilted right and looking straight ahead, then give yourself a name. You can choose to display your name on your profile (which is shown when somebody scans your face) or let others ask you for your name. A sketch of how the app steps through these poses follows.
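The following is a minimal sketch of mapping each registration pose to its on-screen prompt, using the faceCaptureMode enum and FPValues.alertMessages defined in the appendix source code. The helper function itself is illustrative and not part of the shipped code.

// Illustrative helper: returns the instruction shown for each registration pose.
// faceCaptureMode and FPValues.alertMessages are defined in the appendix source code.
func prompt(for mode: faceCaptureMode) -> String? {
    switch mode {
    case .Normal: return FPValues.alertMessages.first   // look straight, neutral expression
    case .Smile:  return FPValues.alertMessages.second  // look straight and smile
    case .LTilt:  return FPValues.alertMessages.third   // tilt to the left
    case .RTilt:  return FPValues.alertMessages.four    // tilt to the right
    case .none, .upload: return nil                     // no prompt for these modes
    }
}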

FIG 5.1 REGISTRATION SCREEN

FIG 5.2 REGISTRATION GUIDE

FIG 5.3 USER REGISTRATION
5.6 MESSAGING MODULE
The screenshot in Fig 5.4 demonstrates the anonymous messaging interface. The identity of the user who sends a message is never stored. The name of the scanned person may or may not be displayed on the menu bar, depending on the visibility preference that user has specified.
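A minimal sketch of how such an anonymous write could look with the Firebase Realtime Database used in this project is shown below. The messages/&lt;UID&gt; path and field names are illustrative assumptions rather than the app’s actual schema; the keys simply mirror the Messages struct in the appendix.

import Foundation
import FirebaseDatabase

// Illustrative sketch: store a comment under the scanned user's UID without
// recording anything about the sender. Path and keys are assumed, not the real schema.
func sendAnonymousMessage(_ body: String, toScannedUID uid: Int) {
    let ref = Database.database().reference().child("messages").child(String(uid))
    let payload: [String: Any] = [
        "messageBody": body,                 // matches the Messages struct's messageBody field
        "time": String(describing: Date())   // note: no sender identifier is written
    ]
    ref.childByAutoId().setValue(payload)
}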

FIG 5.4 MESSAGING MODULE
6. CONCLUSION AND FUTURE WORK
6.1 CONCLUSION
Shyness, social anxiety and conditions such as autism are among the most common social and communication challenges that young people face, and existing social media does little to address them; most platforms are addictive and encourage inauthentic interaction. This project leverages facial recognition, an untapped, futuristic technology that has so far been used mostly for security and authentication, to simplify the process of starting a conversation. Anonymous messaging makes it easier for socially challenged individuals to express themselves.

6.2 FUTURE ENHANCEMENT
In the future, this technology can be open-sourced in the form of an API and offered as a SaaS service, so that other developers can use it without having to build the backend themselves. The iPhone X also ships with TrueDepth sensing hardware; leveraging it would make it possible to distinguish even between identical twins.
REFERENCES
1. Mark Zuckerberg – Building Jarvis. 2016.
https://www.facebook.com/notes/mark-zuckerberg/building-jarvis/10154361492931634/
2. Sajad Farokhi – Near infrared face recognition: A literature survey. 2016.
https://www.sciencedirect.com/science/article/pii/S1574013716300673
3. Shonal Chaudhry – Face detection and recognition in an unconstrained environment for mobile visual assistive system.
https://www.sciencedirect.com/science/article/pii/S1568494616306603
4. Wen-Chun Chen – A face recognition system that simulates perception impairments of autistic children. 2015.
https://www.sciencedirect.com/science/article/pii/S0925231215007250
5. Xiaodong Zhou – Ch OpenCV for interactive open architecture computer vision. 2015.
https://www.sciencedirect.com/science/article/pii/S0965997804000626
6. Chu-Sing Yang – Improved local binary pattern for real scene optical character recognition. 2017.
https://www.sciencedirect.com/science/article/pii/S016786551730260X
7. Jörg Schmalzl – Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar. 2013.
https://www.sciencedirect.com/science/article/pii/S009830041300112X
8. Vladimir Protsenko – Performance analysis of real-time face detection system based on stream data mining frameworks. 2017.
https://www.sciencedirect.com/science/article/pii/S1877705817341322
9. Yue-Wei Du – Pose-robust face recognition with Huffman-LBP enhanced by Divide-and-Rule strategy. 2018.
https://www.sciencedirect.com/science/article/pii/S0031320318300050
APPENDIX
A1-SOURCE CODE
AppColors.swift
import Foundation
import UIKit
struct FPColors {
static let green = UIColor(red: 123/255, green: 229/255, blue: 163/255, alpha: 1)
static let blue = UIColor(red: 72/255, green: 151/255, blue: 181/255, alpha: 1)
static let lightBlue = UIColor(red: 85/255, green: 172/255, blue: 199/255, alpha: 1)
static let lightBlue2 = UIColor(red: 102/255, green: 200/255, blue: 206/255, alpha: 1)
static let messageViewColor = UIColor(red: 229/255, green: 234/255, blue: 245/255, alpha: 1)
static let shadowColor = UIColor(red: 57/255, green: 81/255, blue: 142/255, alpha: 1).cgColor
static var gray = UIColor.init(red: 70/255, green: 72/255, blue: 90/255, alpha: 0.6)
}
FPValues.swift
import UIKit
class FPValues {
class urls {
// Backend endpoints; the commented-out hosts were used during local testing.
private static var host = "https://facepass2.herokuapp.com" // "http://192.168.43.145:5000" // "https://facepass2.herokuapp.com/" // "http://192.168.1.2:5000"
static var storeFace = "\(host)/storeFace/img/id/"
static var detectFace = "\(host)/detectFaces/id/"
static var recognize = "\(host)/recognize/"
static var train = "\(host)/train/"
static var getUID = "\(host)/getUID/"
}

class alertMessages {
static var first = "Hello, Wonderful! Please look at the camera. Stay normal and don't smile."
static var second = "Remain looking at the camera, but please smile this time."
static var third = "Slowly tilt your face to the left."
static var four = "Slowly tilt your face to the right."
}

}
Face Detection enums.swift
import Foundation
// Pose being captured during face registration / recognition.
enum faceCaptureMode {
case Normal, Smile, LTilt, RTilt, none, upload
}
// Whether the camera screen is registering a new face or recognizing one.
enum captureMode {
case detection, recognition
}
// Which device camera is in use.
enum cameraMode {
case front, back
}
Messages.swift
import UIKit
struct Messages {
var messageBody:String
var time:String
}
ConfirmFaceViewController.swift
import UIKit
import EasyPeasy
import SDWebImage
class ConfirmFaceViewController: UIViewController {
override var preferredStatusBarStyle: UIStatusBarStyle {
return .lightContent
}

var UID = 0
var profImgURl = “”

let NextButton:FPButton = {
let button = FPButton()
button.setTitle("Confirm", for: .normal)
button.backgroundColor = FPColors.blue
return button
}()

let backButton:UIButton = {
let button = UIButton()
button.setImage(#imageLiteral(resourceName: "back"), for: .normal)
return button
}()

let profilePic:UIImageView = {
let imageview = UIImageView()
imageview.image = #imageLiteral(resourceName: "prof-ph")
imageview.clipsToBounds=true
imageview.contentMode = UIViewContentMode.scaleAspectFill
return imageview
}()

let titleLabel:UILabel = {
let label = UILabel()
label.text = "Confirm Face"
label.font = UIFont.systemFont(ofSize: UIFont.labelFontSize + 5)
label.textAlignment = .center
//42 55 85
label.textColor = UIColor.init(red: 42/255, green: 55/255, blue: 85/255, alpha: 1)
return label
}()

let usernameLabel:UILabel = {
let label = UILabel()
label.font = UIFont.systemFont(ofSize: UIFont.systemFontSize + 10)
label.textColor = FPColors.blue
label.textAlignment = .center
return label
}()

let descriptionLabel:UITextView = {
let label = UITextView()
label.text = "Have we recognized the right person?"
label.textColor = UIColor.init(red: 92/255, green: 102/255, blue: 125/255, alpha: 1)
label.font = UIFont.systemFont(ofSize: UIFont.labelFontSize - 2)
label.isEditable = false
label.isScrollEnabled = false
label.textAlignment = .center
return label
}()

override func viewDidLoad() {
self.view.addSubviews(views: NextButton,backButton,profilePic,titleLabel,descriptionLabel,usernameLabel)
self.view.backgroundColor = UIColor.white
NextButton.addTarget(self, action: #selector(self.next(sender:)), for: .touchUpInside)
backButton.addTarget(self, action: #selector(self.back(sender:)), for: .touchUpInside)

}
override func viewWillLayoutSubviews() {
setupConstraints()
profilePic.sd_setImage(with: URL(string: profImgURl), placeholderImage: #imageLiteral(resourceName: "prof-ph"), options: SDWebImageOptions.highPriority, completed: nil)
profilePic.sd_setShowActivityIndicatorView(true)
profilePic.sd_setIndicatorStyle(UIActivityIndicatorViewStyle.white)
}

override func viewDidAppear(_ animated: Bool) {
NextButton.layer.cornerRadius = NextButton.frame.height/2
profilePic.layer.cornerRadius = max(profilePic.frame.height,profilePic.frame.width)/2
self.view.bringSubview(toFront: backButton)

}

func setupConstraints(){
backButton <- [
Left(20),
Top(40),
Size(30)
]

titleLabel <- [
Left().to(view,.left),
Top().to(backButton,.top),
Right().to(view,.right),
Height(25)
]

descriptionLabel <- [
Top().to(titleLabel,.bottom),
Left(50),
Right(50),
Height(descriptionLabel.intrinsicContentSize.height)
]

profilePic <- [
CenterX(),
Top(20).to(descriptionLabel,.bottom),
Size(self.view.frame.width/2)
]

usernameLabel <- [
CenterX(),
Top(50).to(profilePic,.bottom)
]

NextButton <- [
Bottom(40).to(view),
Left(65),
Right(65),
CenterX(),
Height(50)
]

}

@objc func back(sender:Any){
print("back")
dismiss(animated: true, completion: nil)
}
@objc func next(sender:Any){
let vc = MessageViewController()
vc.UID = self.UID
vc.profImgUrl = self.profImgURl
vc.delegate = self
vc.usernameLabel.text = usernameLabel.text
present(vc, animated: true, completion: nil)
}

func setProfile(name: String, image: UIImage) {
usernameLabel.text = name
profilePic.image = image
}

}
MainCamDetectionViewController.swift
import UIKit
import AVFoundation
import EasyPeasy
import FirebaseDatabase
class MainCamDetectionViewController: UIViewController,NewFaceDelegate {
var facecount = 0
var captureSession:AVCaptureSession?
var videoPreviewayer:AVCaptureVideoPreviewLayer?
var photoOutput = AVCapturePhotoOutput()
// var output: AVCaptureStillImageOutput!
let metaDataOutput = AVCaptureMetadataOutput()
var outputImages: [UIImage] = []
var uploaded = false
var vCount = 0
var faceMode:faceCaptureMode = .none
var currentCameraPosition = cameraMode.front
let box = UIView()
var isfaceSquareActive = true
// var isImageCapturedForRecognition = false
var isRecognitionMode = false
var isDetectionMode = false
var inputDevice:AVCaptureDeviceInput?
let camerabutton = cameraButton.init(frame: CGRect.zero)
var cameraPreview = UIView()

var alert = FPAlert(frame: CGRect.zero)
let rotateCameraButton = cameraRotateButton(frame: CGRect.zero)
var usingFrontCamera = false
let pinAlert:UIImageView = {
let image = UIImageView()
image.image = #imageLiteral(resourceName: "pinAlert")
return image
}()

var capMode:captureMode = captureMode.recognition

var isPicCaptured = false

var isDetectionPicCaptured = false

var isFirstAlertShown = false

var ref:DatabaseReference?

override func viewDidLoad() {
ref = Database.database().reference().child("users")
set(mode: .recognition)
self.view.backgroundColor = UIColor.black
self.view.addSubviews(views: cameraPreview,box,camerabutton,alert,rotateCameraButton,pinAlert)
camerabutton.button.addTarget(self, action: #selector(cameraButtonClicked(sender:)), for: .touchUpInside)
rotateCameraButton.button.addTarget(self, action: #selector(self.changeCamera(sender:)), for: .touchUpInside)
// alert.setAlertMessage(As: “check”)

}
override func viewDidLayoutSubviews() {
super.viewDidLayoutSubviews()
setupConstraints()
}

override func viewWillAppear(_ animated: Bool) {
hideAlert()
setupConstraints()
AVCaptureDevice.requestAccess(for: AVMediaType.video) { response in
if response {
self.loadCamera()
}
}
}

override func viewDidAppear(_ animated: Bool) {

camerabutton.layer.cornerRadius = camerabutton.frame.height/2
rotateCameraButton.layer.cornerRadius = rotateCameraButton.frame.height/2
alert.layer.cornerRadius = 10
if capMode == .detection {
self.makeAlert(as: FPValues.alertMessages.first)
}

}

func setupConstraints(){
cameraPreview <- Edges()

camerabutton <- [
CenterX(),
Bottom(50),
Size(80)
]

alert <- [
Left(20),
Right(10),
Top(10).to(topLayoutGuide),
Height(80)
]

rotateCameraButton <- [
CenterY().to(camerabutton,ReferenceAttribute.centerY),
Size(*0.5).like(camerabutton),
Left(15).to(camerabutton,.right)
]

pinAlert <- [
// Left(40),
// Right(40),
CenterX().to(camerabutton,ReferenceAttribute.centerX),
Bottom().to(camerabutton,.top),
Height(70),
Width(self.view.frame.width - 50)
]

}

func captureFace(){
facecount += 1
capturePhoto()
}

func capturePhoto(){
let settings = AVCapturePhotoSettings()
let previewPixelType = settings.__availablePreviewPhotoPixelFormatTypes.first!
// Request a small preview image alongside the full photo.
let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType,
kCVPixelBufferWidthKey as String: 160,
kCVPixelBufferHeightKey as String: 160]

settings.previewPhotoFormat = previewFormat
settings.isHighResolutionPhotoEnabled = false
photoOutput.capturePhoto(with: settings, delegate: self )
}

func getFrontCamera() -> AVCaptureDevice? {
return AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: AVCaptureDevice.Position.front)
}

func getBackCamera() -> AVCaptureDevice {
return AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: AVCaptureDevice.Position.back)!
}
func loadCamera() {
var isFirst = false

DispatchQueue.global().async {
if(self.captureSession == nil){
isFirst = true
self.captureSession = AVCaptureSession()
self.captureSession!.sessionPreset = AVCaptureSession.Preset.iFrame960x540
}
var error: NSError?
var input: AVCaptureDeviceInput!

let currentCaptureDevice = (self.usingFrontCamera ? self.getFrontCamera() : self.getBackCamera())

do {
input = try AVCaptureDeviceInput(device: currentCaptureDevice!)
} catch let error1 as NSError {
error = error1
input = nil
print(error!.localizedDescription)
}

// Remove any existing inputs/outputs before re-adding them (needed when switching cameras).
for i: AVCaptureDeviceInput in (self.captureSession?.inputs as! [AVCaptureDeviceInput]) {
self.captureSession?.removeInput(i)
}

for i: AVCaptureOutput in (self.captureSession?.outputs as! [AVCaptureOutput]) {
self.captureSession?.removeOutput(i)
}
if error == nil && self.captureSession!.canAddInput(input) {
self.captureSession!.addInput(input)
}
self.metaDataOutput.connections.first?.videoOrientation = .portrait

if (self.captureSession?.canAddOutput(self.metaDataOutput))! {
self.captureSession?.addOutput(self.metaDataOutput)
self.metaDataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
// Ask the session to report detected faces as metadata objects.
self.metaDataOutput.metadataObjectTypes = [AVMetadataObject.ObjectType.face]
}else {
print("Error: Couldn't add meta data output")

}

if (self.captureSession?.canAddOutput(self.photoOutput))! {
self.captureSession?.addOutput(self.photoOutput)
}else {
print("Error: Couldn't add photo output")
return
}

DispatchQueue.main.async {
if isFirst { //run this code only one time
self.videoPreviewayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
self.videoPreviewayer!.videoGravity = AVLayerVideoGravity.resizeAspectFill
self.videoPreviewayer!.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
self.videoPreviewayer?.frame = self.cameraPreview.layer.bounds
self.cameraPreview.layer.sublayers?.forEach { $0.removeFromSuperlayer() }
self.cameraPreview.layer.addSublayer(self.videoPreviewayer!)
self.captureSession!.startRunning()
}
}
}

}

@objc func changeCamera(sender:Any){
let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.dark)
let blurEffectView = UIVisualEffectView(effect: blurEffect)
blurEffectView.frame = view.bounds
blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
view.addSubview(blurEffectView)
view.bringSubview(toFront: camerabutton)
self.usingFrontCamera = !self.usingFrontCamera
self.loadCamera()
self.box.frame.origin.x = -1000 //just to hide the box off screen
DispatchQueue.main.async {
Timer.scheduledTimer(withTimeInterval: 2, repeats: false, block: { (timer) in
UIView.animate(withDuration: 0.5, animations: {
blurEffectView.alpha = 0
}, completion: { (finished) in
blurEffectView.removeFromSuperview()
})

})
}
}

@objc func cameraButtonClicked(sender:Any){
if Reachability.isConnectedToNetwork() {
isRecognitionMode = true
if capMode == .recognition {

isDetectionMode = false
return
}
isDetectionMode = true
isRecognitionMode = false
hideAlert()
}else{
let alert = UIAlertController(title: "Internet connection not available", message: "Please enable internet connection to continue", preferredStyle: .alert)
let okaction = UIAlertAction(title: "OK", style: .default, handler: { (_) in
alert.dismiss(animated: true, completion: nil)
})
alert.addAction(okaction)
present(alert, animated: true, completion: nil)
}

}

func set(mode: captureMode) {
DispatchQueue.main.async {
self.capMode = mode
self.outputImages = [] // clear any previously captured frames
if self.capMode == .recognition {
self.pinAlert.isHidden = true
self.camerabutton.setupDefaultMode()

}else{
self.pinAlert.isHidden = false
self.camerabutton.setupOkMode()
self.isDetectionPicCaptured = false
}

}
}

func makeAlert(as message:String){
alert.setAlertMessage(As: message)
UIView.animate(withDuration: 1) {
self.alert.transform = CGAffineTransform.identity
}
}

func hideAlert(){
UIView.animate(withDuration: 0.4) {
self.alert.transform = CGAffineTransform(translationX: 0, y: -150)
}
}
}

A2 –SCREENSHOTS
USER INTERFACE

FIG 6.1 REGISTRATION / FACE SETUP PROCESS

FIG 6.2 NEW FACE REGISTRATION GUIDE

FIG 6.3 USER REGISTRATION

FIG 6.4 CONFIRM FACE MODULE

FIG 6.5 MESSAGING MODULE

FIG 6.6 LOGIN/AUTHENTICATION MODULE
