Wednesday, August 24, 2016

Laptop for data science

Which laptops are best suited for data scientists and analysts?

We deal with heavy computations and also need to generate visualizations, so a machine that can take that load would be recommended.

It would be preferable if it could also handle Big Data analytics.

Even though the analytics runs in a MapReduce framework (or on distributed computing), the computations are still heavy and time-consuming, and in most cases they slow the laptop down.

So a laptop with hardware and an OS suited to handling such workloads gracefully is recommended.

[Price is not an issue]

As I am in much the same situation, here is what I look for:

SSD: since you'll likely perform a lot of I/O on large data sets. 1 TB is my bottom line.
RAM: since it's often more convenient and much faster to keep data sets (or parts of them) in memory. 16 GB is really the bottom line.
GPU: Nvidia is sometimes preferable to AMD as it tends to be better supported (e.g. by neural network libraries). I had to get a mid-2014 MBP instead of the mid-2015 model because the latter has an AMD GPU while the former has Nvidia, and I need to use Theano.
OS: Linux tends to have more libraries available (but since it doesn't have any decent speech engine software, I personally use Microsoft Windows, running Linux in a VM or on a server).
CPU: CPUs haven't evolved much over the last few years; some 3rd- or 4th-generation i7 is standard.

As it's often cheaper to add the SSD and RAM oneself, I tend to upgrade mid-spec laptops.

If price isn't an issue, you can have a look at those overpriced Alienwares. If you are more budget-conscious, check to what extent the laptop is upgradable (e.g. maximum RAM and number of SSD slots). In the US, I like Xotic PC, as the maximum specs are clearly defined.
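If you already have a machine and want a quick sanity check against these rules of thumb, here is a rough sketch in Python. The 1 TB and 16 GB thresholds are just the bottom lines above, and the RAM detection via `os.sysconf` is POSIX-only, so treat this as illustrative rather than portable.

```python
import os
import shutil

# Rough spec check against the rules of thumb above (1 TB disk, 16 GB RAM).
MIN_DISK_GB = 1000
MIN_RAM_GB = 16

disk_gb = shutil.disk_usage("/").total / 1e9
print(f"Disk: {disk_gb:.0f} GB ({'OK' if disk_gb >= MIN_DISK_GB else 'below target'})")

try:
    # POSIX-only: total physical memory = page size * number of pages
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
    print(f"RAM: {ram_gb:.0f} GB ({'OK' if ram_gb >= MIN_RAM_GB else 'below target'})")
except (ValueError, OSError, AttributeError):
    print("RAM: could not detect on this platform")

print(f"CPU cores: {os.cpu_count()}")
```
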

Thursday, August 18, 2016

My life in Norway: Pursuing the dream

Born and raised in Macedonia, I spent 4-5 years in Kosova and then migrated to Norway for the first time. My family was one of the few interested in science, especially in math: my father was a math professor and most of my uncles studied math or engineering. I inherited that love of science and math and kept developing it, becoming one of the best in local, national and international competitions in both math and physics (a kind of applied mathematics).
I studied computer technology at the University of Prishtina and won the university scholarship three years in a row.
I studied with international professors from Concordia University, the Vienna Institute of Technology and the Institute Jean Lui Vives.

Even while physically in Kosova, my dream was to move to a more prosperous country to pursue my dream of becoming a great scientist. I had heard of the UK, the US and the big American dream, but never thought of Norway...

I first moved to Norway some years ago and then came back in May 2010, pursuing my dream of a better career. I never imagined this would be the time the revolution of my life started. I will never forget sitting at home and getting a call that was actually a job opportunity in Norway, working for one of the best companies in the world, Nordic Choice Hotels. I answered with a BIG YES and came to the first interview. It all went according to plan: the first interview was successful. I waited in Oslo for a couple of days and got an invitation to the second, decisive round. One day after that, I got the call of my career: your job opportunity is now a job offer. Without hesitating I said YES, and that was the biggest "yes" of my life, because what happened afterwards proved it. I still didn't understand what a wonderful world I was stepping into.

After signing the contract and some official paperwork, I started work in June/July. I was thrilled to start with my new company and bring a successful Business Intelligence project to life. I had time to read up on and understand the business concept and strategy of Nordic Choice Hotels, so I was ready to dive straight into the solution.

One of the biggest highlights of my career here was meeting the owner of Nordic Choice Hotels and a bunch of other businesses around Norway, Mr. Petter A. Stordalen. His ability to energize the company at any time was special. You could feel his absence or his presence without seeing him at all.

Me and Petter Stordalen at Garden Party

During my time at Choice I had the opportunity to meet other important people as well, and I learned a lot from them.
My department and I made a great effort to create the best BI solution for the company under the given conditions, and we excelled by creating the solution presented in this video:

A Visionary Choice - Nordic Choice Hotels Business Intelligence vision from Platon Deloitte on Vimeo.

But things come to an end, sometimes against our will, so in April I had to change jobs and pursue my professional dream at Sopra Steria AS.

Sopra Steria is trusted by leading private and public organisations to deliver successful transformation programmes that address their most complex and critical business challenges. Combining high quality and performance services, added-value and innovation, Sopra Steria enables its clients to make the best use of information technology.
We have a strong local presence across the UK with around 6,700 people in locations in England, Scotland, Wales and Northern Ireland. Sopra Steria supports businesses in the full technology lifecycle - from the definition of strategies through to their implementation. We add value through our expertise in major projects, knowledge of our clients' specific businesses, expertise in technologies and a broad European presence.
Sopra Steria Group, a European leader of digital transformation, was established in September 2014 as a merger of Sopra with Steria. See the timeline for both companies showing the milestones achieved over nearly 50 years before becoming a single entity.

Brief Professional Summary
I am an IT professional with a focus on Business and Data Analytics; I prefer to call myself a Data Scientist. I have in-depth experience using and implementing business intelligence and data analysis tools, with my greatest strength in the Microsoft SQL Server / Business Intelligence Studio stack (SSIS, SSAS, SSRS). I have designed, developed, tested, debugged, and documented analysis and reporting processes for enterprise-wide data warehouse implementations using the SQL Server / BI Suite. I have also designed and modeled OLAP cubes using SSAS and developed them using MS SQL BIDS, SSAS and MDX. I served as an implementation team member, translating source mapping documents and reporting requirements into dimensional data models. I have a strong ability to work closely with business and technical teams to understand, document, design and code SSAS, MDX, DMX, DAX and ETL processes, along with the ability to interact effectively with all levels of an organization. Additional BI tool experience includes ProClarity, Microsoft PerformancePoint, MS Office Excel and MS SharePoint.

Professional highlights as DATA SCIENTIST:

1. Worked for Capgemini Norway AS

2. Worked for Nordic Choice Hotels AS

3. Working for Sopra Steria AS

4. Working on a startup, ELA AS

Academic Honors:

MIT Honor Code Certificate: CS and Programming, BigData (04.06.2013)

Princeton University Honor Code Certificate:  Analytic Combinatorics (10.07.2013)

Stanford University Honor Code Certificate: Mathematical Thinking, Cryptography (06.05.2013)

The University of California At Berkeley Honor Code Certificate: Descriptive Statistics

IIT University Honor Code Certificate: Web Intelligence and Big Data (02.06.2013)

Wesleyan University: Passion Driven Statistics (20.05.2013)

Google Analytics Certified

Career Highlights:

1. Over nine years of experience in the field of Information Technology, System Analysis and Design, Data
    Warehousing, Business Intelligence and Data Science in general

2. Experienced in implementing / managing large scale complex projects involving multiple stakeholders and
    leading and directing multiple project teams

3. Track record of delivering customer focused, well planned, quality products on time, while adapting to
    shifting and conflicting demands and priorities.

4. Experience in Data warehouse / Business Intelligence developments, implementation and operation setup

5. Expertise in Data Modeling, Data Analytics and Predictive Analytics (SSAS, MDX and DMX)

6. Strong knowledge of Data Warehousing and Data Extraction, Transformation, and Loading (ETL)

7. Excellent track record in developing and maintaining enterprise-wide, web-based reporting systems and portals for Finance, enterprise-wide solutions, BI and strategy systems

8. Best New Employee of 2011 at Nordic Choice Hotels AS


1. First place in regional math competitions two years in a row

2. First place in physics at the Balkaniada (Balkan Olympiad in Theoretical Physics)

3. First place in fast math at the International Kangourou Competition

4. Gold Medalist in Microsoft Virtual Academy (Microsoft Business Intelligence)

5. Two-time finalist for the best Business Intelligence solution

Research Work:

1. Riccati Differential Equation solution (published in the printed version of a research journal)

2. Personal Finance Intelligence, published in the IJSER August 2012 edition

Monday, August 15, 2016

The Data Science Process 1/3

Congratulations! You’ve just been hired for your first job as a data scientist at Hotshot Inc., a startup in San Francisco that is the toast of Silicon Valley. It’s your first day at work. You’re excited to go and crunch some data and wow everyone around you with the insights you discover. But where do you start?
Over the (deliciously catered) lunch, you run into the VP of Sales at Hotshot Inc., introduce yourself and ask her, “What kinds of data challenges do you think I should be working on?”
The VP of Sales thinks carefully. You’re on the edge of your seat, waiting for her answer, the answer that will tell you exactly how you’re going to have this massive impact on the company of your dreams.
And she says, “Can you help us optimize our sales funnel and improve our conversion rates?”
The first thought that comes to your mind is: What? Is that a data science problem? You didn’t even mention the word ‘data’. What do I need to analyze? What does this mean?
Fortunately, your mentor data scientists have warned you already: this initial ambiguity is a regular situation that data scientists in industry encounter. All you have to do is systematically apply the data science process to figure out exactly what you need to do.
The data science process: a quick outline
When a non-technical supervisor asks you to solve a data problem, the description of your task can be quite ambiguous at first. It is up to you, as the data scientist, to translate the task into a concrete problem, figure out how to solve it and present the solution back to all of your stakeholders. We call the steps involved in this workflow the “Data Science Process.” This process involves several important steps:
  • Frame the problem: Who is your client? What exactly is the client asking you to solve? How can you translate their ambiguous request into a concrete, well-defined problem?
  • Collect the raw data needed to solve the problem: Is this data already available? If so, what parts of the data are useful? If not, what more data do you need? What kind of resources (time, money, infrastructure) would it take to collect this data in a usable form?
  • Process the data (data wrangling): Real, raw data is rarely usable out of the box. There are errors in data collection, corrupt records, missing values and many other challenges you will have to manage. You will first need to clean the data to convert it to a form that you can further analyze.
  • Explore the data: Once you have cleaned the data, you have to understand the information contained within at a high level. What kinds of obvious trends or correlations do you see in the data? What are the high-level characteristics and are any of them more significant than others?
  • Perform in-depth analysis (machine learning, statistical models, algorithms): This step is usually the meat of your project, where you apply all the cutting-edge machinery of data analysis to unearth high-value insights and predictions.
  • Communicate results of the analysis: All the analysis and technical results that you come up with are of little value unless you can explain to your stakeholders what they mean, in a way that’s comprehensible and compelling. Data storytelling is a critical and underrated skill that you will build and use here.
So how can you help the VP of Sales at Hotshot Inc.? In the next few emails, we will walk you through each step in the data science process, showing you how it plays out in practice. Stay tuned!
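To make the framing step concrete, here is a toy sketch in Python of how the VP of Sales' vague request might be turned into a measurable question: where in the funnel do we lose the most prospects? The funnel stages and counts are invented for illustration; a real project would pull them from the company's CRM or analytics data.

```python
# Toy sales funnel: each stage with the number of prospects who reached it.
# Stage names and counts are invented for illustration.
funnel = [
    ("visited site", 10000),
    ("requested demo", 800),
    ("attended demo", 500),
    ("signed contract", 120),
]

# Conversion rate between each pair of consecutive stages.
for (stage, n), (next_stage, next_n) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {next_n / n:.1%} conversion")

# The transition with the lowest conversion rate is the first place to dig.
worst = min(zip(funnel, funnel[1:]), key=lambda pair: pair[1][1] / pair[0][1])
print("biggest drop-off:", worst[0][0], "->", worst[1][0])
```

With these toy numbers, the biggest drop-off is between visiting the site and requesting a demo, which is the kind of concrete, well-defined problem the rest of the process can work on.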

Thursday, August 11, 2016

Last day(s) to participate for a chance to win a FREE space at our Data Science Boot Camp

A free place on our pioneering Data Science Boot Camp training programme is being offered by specialist recruitment agency, MBN Solutions. Places on the much-anticipated course, aimed at upskilling those with raw analytical grounding into bona fide data scientists, are worth £7,000. The average cost of recruiting a data science specialist is £15,000.
The Data Lab has partnered with New York's globally renowned The Data Incubator (whose courses are reputedly harder to get into than Harvard) to develop the three-week data Boot Camp as part of a drive to plug the nation's data skills gap. It is aimed at helping to unlock the economic potential of data, estimated to be worth £17 billion in Scotland alone.
To apply for the MBN Solutions sponsored place, potential participants need to submit a video explaining how they would use the data science Boot Camp training in their current organisations. The video should be a maximum of two minutes and include:
  • Your current role and experience
  • Why you want to take part in the course
  • Why you believe improving your skills in data science is important
  • How you hope to use the skills you will learn in the course to improve your work 
  • What impact you expect to achieve for your organisation as a result of your skills
The video must be uploaded to YouTube and the link sent in by 12th August.
Michael Young, CEO of MBN Solutions, said: “With the average cost of recruiting a data scientist at £15,000, the Boot Camp presents an incredible opportunity to upskill current staff and invest in your company’s data science offering.
“The Data Incubator is recognised as the go-to experts in the data training sector globally and, by sponsoring a place for a budding data scientist, we are helping to enhance Scotland’s pipeline of data science talent.
“Every day we see fantastic, innovative data science projects going on in our clients’ organisations. Scotland is leading the way in data science in the UK, and The Data Lab is really driving the data agenda forward. Some countries are only just waking up to the potential of data. This course marks a really exciting time for Scotland and The Data Lab, and we at MBN Solutions are thrilled to be a part of it.”
Brian Hills, Head of Data at The Data Lab, said: “We’re very pleased to have MBN Solutions sponsor a place on the Boot Camp, which will take us one step closer to seizing the data opportunity: these skills are in great demand and short supply.
“It is going to be an incredible three weeks with attendees gaining a highly sought after data science skillset and learnings from world-leaders in data science.
“It’s crucial Scotland remains ahead of the curve in data science. By investing in our pipeline of talent and learning from international experts, we are securing our future and taking critical steps toward exploiting the data potential available here in Scotland.”
The pioneering training initiative will allow Scottish businesses to fast-track potential returns by using data analysis to drive insight and decision-making across industry. There are only a few places left for the Boot Camp, which will take place in September in Edinburgh. It will focus on developing practical application skills such as advanced Python, machine learning and data visualisation in a collaborative environment.
For further information on the Boot Camp, how to apply, and how to enter the competition, please check out our Boot Camp page, download our brochure, or get in touch by email.

About The Data Incubator

The Data Incubator is a data science education company based in NYC, DC, and SF with both corporate training and hiring offerings. It leverages real-world business cases to offer customized, in-house training solutions in data and analytics. It also offers partners the opportunity to hire from its 8-week fellowship, which trains PhDs to become data scientists. The fellowship selects 2% of its 2,000+ quarterly applicants and is free for fellows. Hiring companies (including eBay, Capital One, AIG, and Genentech) pay a recruiting fee only if they successfully hire. You can read more about The Data Incubator in Harvard Business Review, VentureBeat, or The Next Web, or read about its alumni at Palantir or the NYTimes.

About MBN Solutions

In a field saturated with lookalike recruitment consultancies, MBN is a truly different business. Priding ourselves on deep, real subject-matter knowledge in the Data Science, Big Data, Analytics and Technology space, a passionate approach to developing our own consultants, and a strategy that places our clients at the heart of our business, MBN is a true market-defining ‘People Solutions’ business.

Another approach to Personal Finance

Re-Inventing Personal Finance using Data Science

Existing software and new approach

Existing Personal Finance applications are usually boring because they all depend on manually entering your data: the right segment, the right amount, just boring. Add manual-input errors and the lack of live updates to your financial status, and the experience gets even worse. These and many other reasons make existing Personal Finance applications nearly useless.

To avoid manual input of data into the application, you need a live feed of your transaction data (credit card usage, bank payments, etc.), with manual input only for cash amounts. Cash, however, is a very small problem, as we tend to avoid it as much as possible and mostly pay electronically.

Most banks offer their customers a digital bank account where all transactions are visible, and that can be the best source for avoiding manual input. So why don't we ask for a built-in application that serves as a Personal Finance app, with even more ways to serve you?

The solution

This application can save lives: it can make you better at your personal finances, help avoid financial crises and help banks understand you better as a customer. It is not only you as a person who benefits, but the entire society and even the bank itself. The bank can score its customers' credit much better and avoid risky loans, risky interest rates for a particular customer, and so on.

To build (in) this app we need to consider many things, especially the approach that Business Intelligence solutions can offer us, while keeping security and impersonation in mind, as we are working with very sensitive data.

Therefore, I am delighted to present PFI, which stands for Personal Finance Intelligence: an unusual approach compared to the Personal Finance solutions on the market today.

Personal Finance Intelligence (PFI) aims to be a Business Intelligence application built into your digital banking service, serving as your personal finance and budget-planning assistant.

Inspired by the Norwegian TV show “Luksusfellen”, this Business Intelligence approach may be a solution for all those who fail to maintain their own economy, for those who want to improve their economy and save more, and, last but not least, for the bank itself.
The foundation of this concept is a Customer Analytics Data Center with the power to process data at the transaction level. The data center's duty is to collect, structure, clean, model and present the data to the bank's customers as a usual Personal Finance application does, but with the data updated automatically. This is the reporting (presentation) layer of your financial status (picture), but the application can offer you much more, and here is why!

In addition to a standard PF application, this solution also includes benchmarking against a standardized customer (Ola Norman) representing the min, max or average of a data set, segmented by properties of the customer's choice. For example: how do I compare with customers aged 28-35 from east Oslo in spending on food and beverages this month?
To give you more control and better planning of your own economy, targeting will be an integrated service inside the application: users (bank customers) can set targets for costs or income manually, or let the application's algorithm fill them in with projections based on each customer's history. You can activate a flagging service so you are warned when approaching certain spending limits, and run an algorithm to optimize the use of the remaining budget so you don't go broke.
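As a rough illustration of the flagging service described above, here is a minimal sketch in Python. The categories, budget amounts and the 80% warning threshold are all assumptions for the example, not part of any real bank's API.

```python
# Minimal sketch of the flagging service: warn a customer when
# month-to-date spending in a category approaches its budget target.
# Categories, budgets (NOK) and the 80% threshold are illustrative only.
WARN_AT = 0.80  # flag a category once 80% of its budget is used

budgets = {"food": 5000, "transport": 1500, "entertainment": 1000}
spent = {"food": 4300, "transport": 600, "entertainment": 950}

flags = []
for category, budget in budgets.items():
    used = spent.get(category, 0) / budget
    if used >= WARN_AT:
        flags.append((category, used))
        print(f"warning: {category} at {used:.0%} of budget")
```

In this toy month, "food" (86%) and "entertainment" (95%) get flagged while "transport" (40%) does not; a real implementation would project the projected end-of-month spend from historical data rather than use the raw month-to-date ratio.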

Big Data can help make it even better

The latest technological advances can make this approach even more interesting and meaningful. Imagine what Big Data and Data Science could do with external data, if customers allowed the bank application to connect to their social media accounts. Social media behavior is very important and can bring valuable segmentation into customer categorization.

Machine learning algorithms can make decisions and budgeting much better, based on other customers' decisions and budgeting techniques.


As I mentioned before, data impersonation and security are a potential showstopper, as we are going to work with benchmarking data sets that include other customers' data. There is a potential for data to leak from one customer to another, so the system must ensure consistency on both sides, and the bank must keep everything under control. Customers' transaction details can also expose banks' 'hidden' costs and fees. Many banks will hesitate to offer this service to their customers for that reason alone; on the other hand, customers have a legitimate right to such information.


The beneficiaries of this approach are not only the customers and the world economy, but also the bank itself, for example when it wants to perform a customer evaluation (credit check) or react to certain financial situations. Today's credit scoring systems make worse decisions because they lack important data.

I am in the process of building the business concept and the technical architecture of this approach. My team and I would love to share it in detail, including implementation, with any company, association or bank in the world interested in offering this service to its customers. This could be the best way to keep the world financial system sustainable, so it does not crash as it did before.

© Copyright All rights reserved to Besim Ismaili 03051982

Oslo, January 2015

Sunday, August 7, 2016

The 7 Steps of a Data Project


Well, building your first data project is actually not that hard. And yes, Dataiku DSS helps, but what will really help you is understanding the data science process. Becoming data-driven is about exactly this: knowing the basic steps and following them to go from raw data to a machine learning model.
The steps of a data project were conceptualized a while ago as the KDD process (for Knowledge Discovery in Databases), and made popular with lots of vintage-looking graphs like this one.
This is our take on the steps of a data project in this awesome age of big data!


Business goal in data project
Understanding the business is the key to assuring the success of your data project. To motivate the different actors necessary to getting your project from design to production, your project must be the answer to a clear business need. So before you even think about the data, go out and talk to the people who could need to make their processes or their business better with data. Then sit down and define a timeline and concrete indicators to measure. I know, processes and politics seem boring, but in the end, they turn out to be quite useful!

If you’re working on a personal project, playing around with a dataset or an API, this may seem irrelevant. It’s not. Just downloading a cool open data set is not enough. I can’t tell you how many cool datasets I downloaded and never did anything with… So settle on a question to answer, or a product to build!


Once you’ve gotten your goal figured out, it’s time to start looking for your data. Mixing and merging data from as many data sources as possible is what makes a data project great, so look as far as possible.

Here are a few ways to get yourself some data:
  • Connect to a database: ask your data and IT teams for the data that’s available, or open your private database up, and start digging through it, and understanding what information your company has been collecting.
  • Use APIs: think of the APIs to all the tools your company’s been using, and the data these guys have been collecting. You have to work on getting these all set up so you can use those email open/click stats, the information your sales team put in Pipedrive or Salesforce, the support ticket somebody submitted, etc. If you’re not an expert coder, plugins in DSS give you lots of possibilities to bring in external data!
  • Look for open data: the Internet is full of datasets to enrich what you have with extra information; census data will help you add the average revenue for the district where your user lives, or OpenStreetMap can show you how many coffee shops are on their street. A lot of countries have open data platforms (like data.gov in the US). If you’re working on a fun project outside of work, these open data sets are also an incredible resource! Check out Kaggle, or this GitHub repository with lots of datasets, for example.
  • Use more APIs: another great way to start a personal project is to make it super personal by working on your own data! You can connect to your social media tools, like Twitter or Facebook, to analyze your followers and friends. It’s extremely easy to set up these connections with tools like IFTTT. For example, I have a bunch of recipes that collect the music I listen to, the places I visit, my steps and the kilometers I run, the contacts I add, etc. And this can be useful for businesses as well! You can analyze very interesting trends on Twitter, or even monitor the competition.


(AKA the dreaded preprocessing step that typically takes up 80% of the time dedicated to a data project)
Once you’ve gotten your data, it’s time to get to work on it! Start digging to see what you’ve got and how you can link everything together to answer your original goal. Start taking notes on your first analyses, and ask questions to business people, or the IT guys, to understand what all your variables mean! Because not everyone will get that c06xx is a product category referring to something awesome.

Once you understand your data, it’s time to clean it! You’ve probably noticed that even though you have a country feature for instance, you’ve got different spellings, or even missing data. It’s time to look at every one of your columns to make sure your data is homogeneous and clean.
Warning! This is probably the longest, most annoying step of your data project. Data scientists report data cleaning is about 80% of the time spent on a project. So it’s going to suck a little bit. Luckily, tools like Dataiku DSS can make this much faster!


enriching in data project
Now that you’ve got clean data, it’s time to manipulate it to get the most value out of it. This is the time to join all your different sources and group logs to get your data down to its essential features.

You’ll then start manipulating the data to extract lots of valuable features. For example, getting a country and even a town out of a visitor’s IP address. Extracting time of day, or week of year from your dates to get something more meaningful.
The possibilities are pretty much endless, and you’ll get a pretty good idea by scrolling through Dataiku DSS’s processors in the Lab of the operations you can execute.
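The date-enrichment idea above can be sketched in a few lines of Python using only the standard library. The timestamp format is an assumption for the example:

```python
from datetime import datetime

# Derive time-of-day, weekday, and week-of-year features from a raw
# timestamp string, assuming a "YYYY-MM-DD HH:MM:SS" format.
def date_features(ts: str) -> dict:
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    return {
        "hour": dt.hour,
        "weekday": dt.strftime("%A"),
        "week_of_year": dt.isocalendar()[1],
        "is_weekend": dt.weekday() >= 5,  # Saturday or Sunday
    }

print(date_features("2016-08-15 09:30:00"))
# 2016-08-15 was a Monday in ISO week 33
```

Features like "hour" or "is_weekend" are usually far more predictive than the raw timestamp itself.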


building insights and graphs in data project
You now have a nice dataset (or maybe several), so this is a good time to start exploring it by building graphs. When you’re dealing with large volumes of data, they’re the best way to explore and communicate your findings.

You’ll find lots of tools available that make this step fun to prepare and to receive. The tricky part is always being able to dig into your graphs to answer any question somebody might have about an insight. That’s when the data preparation comes in handy: you’re the one who did the dirty work, so you know the data like the back of your hand!
If this is the final step of your project, it’s important to use APIs and plugins so you can push those insights to where your end users want them. So get integrated with their tools!
Your graphs don’t have to be the end of your project though. They’re a way to uncover more trends that you want to explain. They’re also a way to develop more interesting features. For example, by putting your data points on a map you could perhaps notice that specific geographic zones are more telling than specific countries or cities.



By working with clustering algorithms (aka unsupervised learning), you can build models to uncover trends in the data that were not distinguishable in graphs and stats. These algorithms create groups of similar events (clusters) and more or less explicitly express which features are decisive in these results. Tools like Dataiku DSS help beginners run basic open source algorithms easily in clickable interfaces.
More advanced data scientists can then go even further and predict future trends with supervised algorithms. By analyzing past data, they find features that have impacted past trends and use them to build predictions. More than just gaining knowledge, this final step can lead to building whole new products and processes. Getting these into production requires data scientists and engineers, but it’s important that all the parties involved (business users and analysts as well) understand the process, so they can make sense of what comes out in the end.
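To make the clustering idea concrete, here is a minimal k-means implementation in pure Python, grouping toy customers by two features. The data points, the choice of two clusters, and the deterministic initialization are all simplifications for illustration; a real project would run a library implementation (e.g. scikit-learn) on far more data.

```python
# Toy k-means: group customers described by two features (say, monthly
# spend and visit frequency) into two clusters. Data is invented.
points = [(1.0, 2.0), (1.5, 1.8), (1.2, 2.2),   # a low-value group
          (8.0, 8.5), (8.3, 8.0), (7.8, 9.0)]   # a high-value group

def kmeans(points, iters=10):
    # Deterministic init for this sketch: seed with the first and last point.
    centroids = [points[0], points[-1]]
    k = len(centroids)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            nearest = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                                + (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        # move each centroid to the mean of its cluster
        centroids = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                     if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(points)
print("centroids:", centroids)
```

On this toy data the algorithm converges to the two group means, which is exactly the "groups of similar events" idea described above.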


The main goal of any business project is to prove its effectiveness as fast as possible to justify, well, your job. Data projects are the same. By saving time on data cleaning and enriching, you can get to the end of the project quickly and obtain your first results. These first insights are a great start for uncovering more necessary cleaning and for developing more features, continuously improving results and model outputs.

Now that you’ve got the skills, get started right now by building projects in Dataiku DSS!

Saturday, August 6, 2016

Training in Critical Thinking & Descriptive Intelligence Analysis

Level 1 Intelligence Analyst Certification

About This Course

Course Description

The views expressed in this course are the instructor's alone and do not reflect the official position of the U.S. Government, the Intelligence Community, or the Department of Defense.

Although anyone can claim the title of “intelligence analyst,” there are currently few commonly understood, standardized certifications available to confirm analytic skill and proficiency. Some may argue that each analytic assessment should be judged on its content and not on the certification or reputation of the author. However, an analytic product can often read well even though its analytic underpinnings are flawed. It would also be beneficial to have some objective measure of an analyst’s skill before selecting him for a task, rather than discovering afterwards that the analyst was unable to meet it. Having addressed why certifications are needed, and assuming certifications would provide a worthwhile benefit, the discussion then turns to how, and in what areas, one should attain certification. Through an analysis of the concept of analysis, the author proposes that three basic divisions be created to train and certify one as either a descriptive, explanative, or predictive analyst. This course provides level 1 certification as a descriptive intelligence analyst.

What are the requirements?
No prior preparation is necessary; however, a strong academic background, understanding of the scientific method, and an open mind will help the student perform well in this course.

What am I going to get from this course?

Apply critical thinking skills throughout the analytic process
Identify and mitigate biases to reveal unstated assumptions
Refine and clarify intelligence questions
Conduct research to identify existing data and gather new evidence
Select and apply appropriate analytic techniques
Reevaluate and revalidate previous analytic conclusions.

What is the target audience?

This course is intended for the new intelligence analyst who has little to no prior experience. This course will provide the basic analytic skills necessary to produce basic, logically sound, descriptive intelligence analysis.
More experienced intelligence analysts will also find this course provides great "back to basics" refresher training.



Section 1: Introduction
Lecture 1
Welcome and overview 05:02
Lecture 2
The need for intelligence analyst certifications Article
Lecture 3
Course administration and supplemental material 02:45
Section 2: Critical Thinking and Avoiding Bias
Lecture 4
Thinking about thinking I: critical thinking 16:31
Quiz 1
Critical thinking quiz 5 questions
Lecture 5
Thinking about thinking II: logical, probable, and plausible reasoning 06:42
Lecture 6
Analytic pitfalls 14:19
Lecture 7
Insights into problem solving 15:34
Quiz 2
Section review quiz 5 questions
Section 3: Getting the Question Right
Lecture 8
Problem restatement 06:23
Quiz 3
Section review quiz 5 questions
Section 4: Intelligence Research and Collection
Lecture 9
Gathering the evidence 08:42
Lecture 10
Evaluating the evidence 08:40
Quiz 4
Section review quiz 5 questions
Section 5: Intelligence Analysis
Lecture 11
Selecting the right technique 03:03
Lecture 12
Realizing the power of analytics: arming the human mind Article
Lecture 13
Sorting, chronologies, and timelines 05:43
Lecture 14
The matrix 06:25
Lecture 15
Decision/event trees 07:57
Lecture 16
Link analysis 03:23
Lecture 17
Analysis of competing hypotheses (ACH) 16:27
Section 6: Conclusion
Lecture 18
Argument evaluation and reevaluation 10:30
Quiz 5
Final certification exam 25 questions
Lecture 19
Bonus Lecture Article