Wednesday, August 24, 2016

Why Raspberry Pi?

Why Raspberry Pi? A tiny board that can do almost anything.
If you are used to building electronics projects around microcontrollers, switching to the Raspberry Pi brings real advantages. In development since 2006, this small and relatively cheap computer can be hacked for digital control tasks such as robotics and remote control, can serve web applications, and can even be built into an inexpensive desktop computer.
The Raspberry Pi can act as a microcontroller and as a computer at the same time. Where a microcontroller is limited in RAM and cannot drive graphics, the Raspberry Pi is not just a controller: it can run full graphical applications, much like the apps on a tablet.




Tuesday, August 23, 2016

Installing the Raspberry Pi

How do you install the Raspberry Pi?

Just watch the video tutorial below. Enjoy!



Saturday, August 20, 2016

Inserting Data into the Database (Create)

How do you insert data into a database from a PHP script?
Three files are needed:
  1. koneksi.php (connects to the target database)
  2. form_input.php (the form, containing text boxes and a Save button)
  3. simpan_data.php (when the submit button is pressed, this script writes the data into the target database)
How do you write the scripts?
Watch the following video.



Download the complete source code here.

Database Processing with PHP - Basics

In the previous session we got as far as connecting a PHP script to a database so it can execute queries. In this part we move on to the core of PHP. Why "the core"?
Because, in essence, all database-driven PHP programming has just four main goals:

  1. Creating data (Create)
  2. Displaying data (Read)
  3. Editing or updating data (Update)
  4. Deleting data (Delete)
These four techniques, or goals, of a PHP script are known as CRUD (Create, Read, Update, Delete). Once you have mastered CRUD, it is like having a bow and arrow in hand: all that is left is deciding where to shoot. Bottom line: every PHP master, or master-to-be, knows CRUD cold, so don't study anything else before you master it.
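As a taste of these four operations, here is a minimal sketch using Python's built-in sqlite3 module (the tutorials themselves use PHP with MySQL; the table and column names here are made up purely for illustration):

```python
# The four CRUD operations, illustrated with Python's built-in sqlite3 module.
import sqlite3

con = sqlite3.connect(":memory:")   # throwaway in-memory database
con.execute("CREATE TABLE siswa (id INTEGER PRIMARY KEY, nama TEXT)")

# Create
con.execute("INSERT INTO siswa (nama) VALUES (?)", ("Budi",))

# Read
rows = con.execute("SELECT id, nama FROM siswa").fetchall()
print(rows)                         # [(1, 'Budi')]

# Update
con.execute("UPDATE siswa SET nama = ? WHERE id = ?", ("Budi Santoso", 1))

# Delete
con.execute("DELETE FROM siswa WHERE id = ?", (1,))
print(con.execute("SELECT COUNT(*) FROM siswa").fetchone()[0])  # 0

con.close()
```

The `?` placeholders are the parameterized-query habit you will want in PHP as well (prepared statements), since they keep user input out of the SQL string.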

OK, now just check out the next part in the tutorial Inserting Data into the Database (Create).


Connecting to the Database - PHP Basics

Once we have a database and its tables, the next step is to connect them to the PHP script we are about to write. Here is how to create the PHP connection to the database:
First, create a PHP file named "koneksi.php", then type in the following code.
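A minimal sketch of what koneksi.php can look like, based on the description below (database "belajar" on "localhost", user "root", no password); the exact code shown in the original post may differ:

```php
<?php
// koneksi.php - connect to the MySQL database (a sketch; adjust to your setup)
$host     = "localhost";
$user     = "root";
$password = "";
$database = "belajar";

$koneksi = mysqli_connect($host, $user, $password, $database);
if (!$koneksi) {
    die("Connection failed: " . mysqli_connect_error());
}
?>
```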



The connection script is the PHP code that connects to the database named "belajar" on the server "localhost" as user "root", with no password. Once this connection script has run, every query against that database can be executed.

Notes:

  • Save the connection script in its own file and include it from your other PHP scripts, or place it at the start of every script that issues queries against the database.
  • If you want to connect to a different database, just change the name of the database the script executes; the line to change is $database = "belajar";


Download the complete source code here.

Friday, August 19, 2016

Blinking LED - Python + Arduino Uno

Problem:
You have an Arduino microcontroller board and want to write a program that acts as an interface between the computer and the microcontroller, but you are not sure where to start.
Solution:
To handle this you need a Python interpreter plus an extra library that can connect Python to the Arduino. One option is the pyFirmata package, which you can get from https://pypi.python.org/pypi/pyFirmata
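As a rough sketch of what a pyFirmata blink program can look like (the serial port name, pin number, and timing below are assumptions; adjust them for your board, and make sure the Arduino is running the StandardFirmata sketch from the Arduino IDE examples):

```python
def blink(port="COM3", pin=13, times=10, interval=0.5):
    """Blink the LED on `pin` via Firmata. On Linux the port is usually
    something like /dev/ttyACM0 instead of COM3."""
    import time
    from pyfirmata import Arduino  # pip install pyfirmata

    board = Arduino(port)          # open the serial link to the board
    for _ in range(times):
        board.digital[pin].write(1)   # LED on
        time.sleep(interval)
        board.digital[pin].write(0)   # LED off
        time.sleep(interval)
    board.exit()

if __name__ == "__main__":
    blink()
```

The import is kept inside the function so the file can be loaded even on a machine without pyFirmata installed.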

Rather than getting more confused, better watch the video:




Download the complete source code here.

Face Detection - Python

How do you detect faces and eyes with Python?

Watch the following video tutorial:



Download the complete source code here.

Accessing Video with Python

How do you access webcam video with Python?

Here is the video tutorial. Enjoy!



Source code (note that it uses the legacy cv2.cv API from OpenCV 2.x, which was removed in OpenCV 3):

import cv2.cv as cv  # legacy OpenCV 2.x API

cv.NamedWindow("camera", 1)
capture = cv.CaptureFromCAM(0)    # open the default webcam
while True:
    img = cv.QueryFrame(capture)  # grab one frame
    cv.ShowImage("camera", img)
    if cv.WaitKey(10) == 27:      # ESC key exits
        break
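For newer OpenCV versions (3.x/4.x), where the cv2.cv module no longer exists, the same loop can be written against the cv2 API. This is a sketch under the assumption that you have a connected webcam:

```python
def show_camera(index=0):
    """Display webcam frames until ESC is pressed (needs OpenCV 3.x/4.x
    and a connected webcam)."""
    import cv2  # modern replacement for the legacy cv2.cv module

    cap = cv2.VideoCapture(index)      # open the camera
    while True:
        ok, frame = cap.read()         # grab one frame
        if not ok:                     # camera unplugged or read failed
            break
        cv2.imshow("camera", frame)
        if cv2.waitKey(10) == 27:      # ESC key exits
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    show_camera()
```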



Download the complete source code here.

Thursday, August 18, 2016

Face Tracking Using CamShift

Want to build a face tracking system with Python?

Below is CamShift source code written in Python.
Note that this code uses the OpenCV (legacy cv2.cv) and NumPy libraries.

#!C:\Python27\python

import cv2.cv as cv
import numpy

def is_rect_nonzero(r):
    (_,_,w,h) = r
    return (w > 0) and (h > 0)

class CamShiftDemo:

    def __init__(self):
        self.capture = cv.CaptureFromCAM(0)
        cv.NamedWindow( "CamShiftDemo", 1 )
        cv.NamedWindow( "Histogram", 1 )
        cv.SetMouseCallback( "CamShiftDemo", self.on_mouse)

        self.drag_start = None      # Set to (x,y) when mouse starts drag
        self.track_window = None    # Set to rect when the mouse drag finishes

        print( "Keys:\n"
            "    ESC - quit the program\n"
            "    b - switch to/from backprojection view\n"
            "To initialize tracking, drag across the object with the mouse\n" )

    def hue_histogram_as_image(self, hist):
        """ Returns a nice representation of a hue histogram """

        histimg_hsv = cv.CreateImage( (320,200), 8, 3)

        mybins = cv.CloneMatND(hist.bins)
        cv.Log(mybins, mybins)
        (_, hi, _, _) = cv.MinMaxLoc(mybins)
        cv.ConvertScale(mybins, mybins, 255. / hi)

        w,h = cv.GetSize(histimg_hsv)
        hdims = cv.GetDims(mybins)[0]
        for x in range(w):
            xh = (180 * x) / (w - 1)  # hue sweeps from 0-180 across the image
            val = int(mybins[int(hdims * x / w)] * h / 255)
            cv.Rectangle( histimg_hsv, (x, 0), (x, h-val), (xh,255,64), -1)
            cv.Rectangle( histimg_hsv, (x, h-val), (x, h), (xh,255,255), -1)

        histimg = cv.CreateImage( (320,200), 8, 3)
        cv.CvtColor(histimg_hsv, histimg, cv.CV_HSV2BGR)
        return histimg

    def on_mouse(self, event, x, y, flags, param):
        if event == cv.CV_EVENT_LBUTTONDOWN:
            self.drag_start = (x, y)
        if event == cv.CV_EVENT_LBUTTONUP:
            self.drag_start = None
            self.track_window = self.selection
        if self.drag_start:
            xmin = min(x, self.drag_start[0])
            ymin = min(y, self.drag_start[1])
            xmax = max(x, self.drag_start[0])
            ymax = max(y, self.drag_start[1])
            self.selection = (xmin, ymin, xmax - xmin, ymax - ymin)

    def run(self):
        hist = cv.CreateHist([180], cv.CV_HIST_ARRAY, [(0,180)], 1 )
        backproject_mode = False
        while True:
            frame = cv.QueryFrame( self.capture )

            # Convert to HSV and keep the hue
            hsv = cv.CreateImage(cv.GetSize(frame), 8, 3)
            cv.CvtColor(frame, hsv, cv.CV_BGR2HSV)
            self.hue = cv.CreateImage(cv.GetSize(frame), 8, 1)
            cv.Split(hsv, self.hue, None, None, None)

            # Compute back projection
            backproject = cv.CreateImage(cv.GetSize(frame), 8, 1)

            # Run the cam-shift
            cv.CalcArrBackProject( [self.hue], backproject, hist )
            if self.track_window and is_rect_nonzero(self.track_window):
                crit = ( cv.CV_TERMCRIT_EPS | cv.CV_TERMCRIT_ITER, 10, 1)
                (iters, (area, value, rect), track_box) = cv.CamShift(backproject, self.track_window, crit)
                self.track_window = rect

            # If mouse is pressed, highlight the current selected rectangle
            # and recompute the histogram

            if self.drag_start and is_rect_nonzero(self.selection):
                sub = cv.GetSubRect(frame, self.selection)
                save = cv.CloneMat(sub)
                cv.ConvertScale(frame, frame, 0.5)
                cv.Copy(save, sub)
                x,y,w,h = self.selection
                cv.Rectangle(frame, (x,y), (x+w,y+h), (255,255,255))

                sel = cv.GetSubRect(self.hue, self.selection )
                cv.CalcArrHist( [sel], hist, 0)
                (_, max_val, _, _) = cv.GetMinMaxHistValue( hist)
                if max_val != 0:
                    cv.ConvertScale(hist.bins, hist.bins, 255. / max_val)
            elif self.track_window and is_rect_nonzero(self.track_window):
                cv.EllipseBox( frame, track_box, cv.CV_RGB(255,0,0), 3, cv.CV_AA, 0 )

            if not backproject_mode:
                cv.ShowImage( "CamShiftDemo", frame )
            else:
                cv.ShowImage( "CamShiftDemo", backproject)
            cv.ShowImage( "Histogram", self.hue_histogram_as_image(hist))

            c = cv.WaitKey(7) % 0x100
            if c == 27:
                break
            elif c == ord("b"):
                backproject_mode = not backproject_mode

if __name__=="__main__":
    demo = CamShiftDemo()
    demo.run()
    cv.DestroyAllWindows()

Download the complete source code here.



Wednesday, June 29, 2016

Export to Excel Using an Excel Template in VB.NET

Good evening!
This time I want to cover how to build a report using an MS Excel template. Why use a template? Simply because it is easier to apply when the report contains many headers, or custom header placements: all you do is open a template you prepared earlier and fill in the cells your report needs.
Let's get straight to work.

Step 1:
Open MS Excel, lay out the data, then save it as a template (Excel 97-2003 Template, .xlt).

Step 2:
Open the VB workspace.
Add a reference to Microsoft.Office.Interop.Excel.

Step 3:
Build the form design as well.



Step 4:
Write the code:
Imports Excel = Microsoft.Office.Interop.Excel
Public Class Form1

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load

    End Sub

    Private Sub btn_cetak_Click(sender As Object, e As EventArgs) Handles btn_cetak.Click
        Dim excel_App As Excel.Application
        Dim excel_WorkBook As Excel.Workbook
        Dim excel_WorkSheet As Excel.Worksheet
        Dim misvalue As Object = System.Reflection.Missing.Value
        excel_App = New Excel.Application
        excel_WorkBook = excel_App.Workbooks.Open("E:\latihan visual studio\Templet Excel.xlt")
        excel_WorkSheet = excel_WorkBook.Sheets("sheet1")  ' select the sheet by name
        excel_WorkSheet.Cells(3, 2) = txt_nama.Text
        excel_WorkSheet.Cells(4, 2) = txt_alamat.Text
        excel_App.Visible = True

    End Sub
End Class





Step 5:

Run the program.






That's all for now, I'm off to bed.


Reading Text from MS Excel and Displaying It in VB.NET

Welcome back!
This time I want to cover how to read text from MS Excel and display it in VB.NET.
Here is how:

Step 1:
Create one MS Excel workbook and save it under the name Baca Excel.
In row 1, column 1, type the text Selamat Datang di Ms Excel.

Step 2:
Open a Visual Basic Windows Forms project.
To use the MS Excel object in VB, we have to add a reference to that object:
choose Microsoft.Office.Interop.Excel in the References window.

Step 3:
Build the form design.



Step 4:
Now it's time to code:
Imports Excel = Microsoft.Office.Interop.Excel
Public Class Form1
    Public Function baca_excel(ByVal filename As String, ByVal sheetname As String, ByVal row As Integer, ByVal column As Integer) As String
        Dim excel_App As Excel.Application
        Dim excel_WorkBook As Excel.Workbook
        Dim excel_WorkSheet As Excel.Worksheet
        excel_App = New Excel.Application
        excel_WorkBook = excel_App.Workbooks.Open(filename)
        excel_WorkSheet = excel_WorkBook.Sheets(sheetname)
        Dim value As String
        value = excel_WorkSheet.Cells(row, column).value
        excel_WorkBook.Close()
        excel_App.Quit()
        Return value

    End Function
    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
    End Sub
    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
        MsgBox(baca_excel("E:\latihan visual studio\Baca Excel.xlsx", "sheet1", 1, 1))
    End Sub
End Class




Note: keep in mind that what the program reads from MS Excel here is row 1, column 1:
MsgBox(baca_excel("E:\latihan visual studio\Baca Excel.xlsx", "sheet1", 1, 1))
If you want to read the value in a different row or column, adjust those numbers accordingly.


Step 5:
Run the application.



Give it a try!
Export to Excel Using VB.NET

OK, this time I want to cover how to build, or print, a report as an Excel file.
Straight to it:
Step 1:
Open the Visual Basic workspace (Windows Forms app).
Save it under the name laporan excel.
Step 2:
To use the MS Excel object in VB, we have to add a reference to Microsoft.Office.Interop.Excel.
Step 3:
Design the form as in the following image.




Step 4:
Write the code; here is the view code for this project:


Imports Excel = Microsoft.Office.Interop.Excel
Public Class Form1
    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles excel.Click
        Dim excel_App As Excel.Application
        Dim excel_WorkBook As Excel.Workbook
        Dim excel_WorkSheet As Excel.Worksheet
        Dim misValue As Object = System.Reflection.Missing.Value
        excel_App = New Excel.Application
        excel_WorkBook = excel_App.Workbooks.Add(misValue)
        excel_WorkSheet = excel_WorkBook.Sheets("sheet1")
        excel_WorkSheet.Cells(1, 1) = "selamat bos anda berhasil"
        excel_WorkSheet.SaveAs("E:\latihan visual studio\vbexcel.xlsx")
        excel_WorkBook.Close()
        excel_App.Quit()
        If MsgBox("file excel sukses di cetak, silahkan cek file tersebut di drive E:\latihan visual studio", MsgBoxStyle.OkOnly) = MsgBoxResult.Ok Then
            End
        End If
    End Sub
End Class



Step 5:
Run the application, then click the Cetak Dokumen Excel button.
The app will then create the MS Excel document, and you can find the file at the location you specified.




Give it a try!
Testing a Visual Basic Connection to MySQL from XAMPP

How do you connect Visual Basic to the data in XAMPP's MySQL?
Make sure you have installed Visual Studio and XAMPP.


Add a reference to MySql.Data under References.
First build the form design as in the image below.



Source code:


Imports MySql.Data.MySqlClient
Public Class Form1
    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
    End Sub
    Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
        Dim MySQLConnection As New MySqlConnection("host=127.0.0.1; user=root;database=sia_sekolahdb ")
        Try
            MySQLConnection.Open()
            MessageBox.Show("Connected!")

        Catch ex As Exception
            MessageBox.Show(ex.Message)
        End Try
    End Sub
End Class


Once the connection succeeds, it looks like this:


Here is the video tutorial:





Tuesday, June 28, 2016

Bandung Command Center

Ever heard the phrase "bandung juara"? Yep, Bandung is here with another innovation, the one we call the Bandung Command Center. So what is a command center? People in Bandung say the building looks like Tony Stark's room in the Iron Man movies. Is that so? Rather than stay curious, better watch the video instead of daydreaming.


What does Pak Emil have to say about the Bandung Command Center?

Compared with the Iron Man movies, how does it look to you?

Hopefully not only Bandung; may other cities become even better too.

Face Recognition CCTV

Face recognition is one branch of artificial intelligence, in particular robot vision.
It seems we are falling behind: this kind of system is already in use abroad. When will we have it here? For now, just watch the video below; may it inspire you.

So, feel any urge to try developing one?

Voice Recognition

The world keeps developing, and technology keeps racing ahead, like this one: much like in the Iron Man movies, the system uses voice recognition, like Jarvis. Curious? Go watch the video below.
Enjoy!


You can all build the program too; just follow the tutorial. The Mako software shown above is paid if you want to try it, while Zira, no less capable, is coded in Visual C#, which looks a bit more approachable.


I have tried a program like this myself, but I tried combining it with an Arduino. Curious? Just watch the following video.


The world will be ever more comfortable once we control all our technology with artificial intelligence.



Home Security System


A home security system: looks worth a try.

Fingerprint Arduino Source Code

Here is the source code (it uses the Adafruit Fingerprint sensor library for Arduino):


#include <Adafruit_Fingerprint.h>
#include <SoftwareSerial.h>

int getFingerprintIDez();

// pin #2 is IN from sensor (GREEN wire)
// pin #3 is OUT from arduino  (WHITE wire)
SoftwareSerial mySerial(2, 3);
Adafruit_Fingerprint finger = Adafruit_Fingerprint(&mySerial);

// On Leonardo/Micro or others with hardware serial, use those! #0 is green wire, #1 is white
//Adafruit_Fingerprint finger = Adafruit_Fingerprint(&Serial1);

void setup() 
{
  while (!Serial);  // For Yun/Leo/Micro/Zero/...
 
  Serial.begin(9600);
  Serial.println("Adafruit finger detect test");

  // set the data rate for the sensor serial port
  finger.begin(57600);
 
  if (finger.verifyPassword()) {
    Serial.println("Found fingerprint sensor!");
  } else {
    Serial.println("Did not find fingerprint sensor :(");
    while (1);
  }
  Serial.println("Waiting for valid finger...");
}

void loop()                     // run over and over again
{
  getFingerprintIDez();
  delay(50);            //don't need to run this at full speed.
}

uint8_t getFingerprintID() {
  uint8_t p = finger.getImage();
  switch (p) {
    case FINGERPRINT_OK:
      Serial.println("Image taken");
      break;
    case FINGERPRINT_NOFINGER:
      Serial.println("No finger detected");
      return p;
    case FINGERPRINT_PACKETRECIEVEERR:
      Serial.println("Communication error");
      return p;
    case FINGERPRINT_IMAGEFAIL:
      Serial.println("Imaging error");
      return p;
    default:
      Serial.println("Unknown error");
      return p;
  }

  // OK success!

  p = finger.image2Tz();
  switch (p) {
    case FINGERPRINT_OK:
      Serial.println("Image converted");
      break;
    case FINGERPRINT_IMAGEMESS:
      Serial.println("Image too messy");
      return p;
    case FINGERPRINT_PACKETRECIEVEERR:
      Serial.println("Communication error");
      return p;
    case FINGERPRINT_FEATUREFAIL:
      Serial.println("Could not find fingerprint features");
      return p;
    case FINGERPRINT_INVALIDIMAGE:
      Serial.println("Could not find fingerprint features");
      return p;
    default:
      Serial.println("Unknown error");
      return p;
  }
 
  // OK converted!
  p = finger.fingerFastSearch();
  if (p == FINGERPRINT_OK) {
    Serial.println("Found a print match!");
  } else if (p == FINGERPRINT_PACKETRECIEVEERR) {
    Serial.println("Communication error");
    return p;
  } else if (p == FINGERPRINT_NOTFOUND) {
    Serial.println("Did not find a match");
    return p;
  } else {
    Serial.println("Unknown error");
    return p;
  }  
 
  // found a match!
  Serial.print("Found ID #"); Serial.print(finger.fingerID);
  Serial.print(" with confidence of "); Serial.println(finger.confidence);
  return finger.fingerID;  // ensure the function returns a value on the success path
}

// returns -1 if failed, otherwise returns ID #
int getFingerprintIDez() {
  uint8_t p = finger.getImage();
  if (p != FINGERPRINT_OK)  return -1;

  p = finger.image2Tz();
  if (p != FINGERPRINT_OK)  return -1;

  p = finger.fingerFastSearch();
  if (p != FINGERPRINT_OK)  return -1;
 
  // found a match!
  Serial.print("Found ID #"); Serial.print(finger.fingerID);
  Serial.print(" with confidence of "); Serial.println(finger.confidence);
  return finger.fingerID;
}

Monday, January 25, 2016

Controlling a Motor with a Face-Tracking Camera



using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.CvEnum;
using System.IO;
using System.Diagnostics;
using System.Media;
using System.Net.Sockets;
using DirectShowLib;
using System.IO.Ports;


namespace MultiFaceRec
{
    public partial class FrmPrincipal : Form
    {
        //Declaration of all variables, vectors and haarcascades
        Image<Bgr, Byte> currentFrame;
        Capture grabber;
        HaarCascade face;
        HaarCascade eye;
        MCvFont font = new MCvFont(FONT.CV_FONT_HERSHEY_TRIPLEX, 0.5d, 0.5d);
        Image<Gray, byte> result, TrainedFace = null;
        Image<Gray, byte> gray = null;
        List<Image<Gray, byte>> trainingImages = new List<Image<Gray, byte>>();
        List<string> labels= new List<string>();
        List<string> NamePersons = new List<string>();
        int ContTrain, NumLabels, t;
        string name, names = null;
        bool CapturingProcess = false;
        TcpClient _tcpClient = null;
        bool CapRunning = false;
        private int _CameraIndex;
        bool CamAuto = false;
        bool ConnectAuto = false;
        string revision = "3.7.14";



        public FrmPrincipal()
        {
            InitializeComponent();
            //Load haarcascades for face detection
            face = new HaarCascade("haarcascade_frontalface_default.xml");
            //eye = new HaarCascade("haarcascade_eye.xml");
            try
            {
                //Load previously trained faces and labels for each image
                string Labelsinfo = File.ReadAllText(Application.StartupPath + "/TrainedFaces/TrainedLabels.txt");
                string[] Labels = Labelsinfo.Split('%');
                NumLabels = Convert.ToInt16(Labels[0]);
                ContTrain = NumLabels;
                string LoadFaces;

                for (int tf = 1; tf < NumLabels+1; tf++)
                {
                    LoadFaces = "face" + tf + ".bmp";
                    trainingImages.Add(new Image<Gray, byte>(Application.StartupPath + "/TrainedFaces/" + LoadFaces));
                    labels.Add(Labels[tf]);
                }
            }
            catch(Exception e)
            {
                //MessageBox.Show(e.ToString());
                MessageBox.Show("No faces have been trained. Please add at least a face (train with the Add face Button).", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
            }
        }



        private void button1_Click(object sender, EventArgs e)
        {
            //Set the camera number to the one selected via combo box
            //This method is no longer used, DirectShow to display device name is now used
             //int CamNumber = -1;
             //CamNumber = int.Parse(cbCamIndex.Text);
        
            //This is for reading faces from a video file, for testing only at this time
             //String sFileName = @"c:\test.mp4";  //this work with new opencv_ffmpeg290.dll in the bin folder
             //grabber = new Capture(sFileName);  //this works, but crashes the app once movie stops

            //Initialize the capture device
            grabber = new Capture(_CameraIndex);
            grabber.QueryFrame();
            //Initialize the FrameGraber event
            Application.Idle += new EventHandler(FrameGrabber);
            button1.Enabled = false;
            CapturingProcess = true;
            btn_stop_capture.Enabled = true;
            groupBox1.Enabled = true;
            CapRunning = true;
        }


        private void button2_Click(object sender, System.EventArgs e)
        {
            try
            {
                //Trained face counter
                ContTrain = ContTrain + 1;

                //Get a gray frame from capture device
                gray = grabber.QueryGrayFrame().Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);

                //Face Detector
                MCvAvgComp[][] facesDetected = gray.DetectHaarCascade(
                face,
                1.2,
                10,
                Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                new Size(20, 20));

                //Action for each element detected
                foreach (MCvAvgComp f in facesDetected[0])
                {
                    TrainedFace = currentFrame.Copy(f.rect).Convert<Gray, byte>();
                    break;
                }

                //resize face detected image for force to compare the same size with the
                //test image with cubic interpolation type method
                TrainedFace = result.Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
                trainingImages.Add(TrainedFace);
                labels.Add(textBox1.Text);

                //Show face added in gray scale
                imageBox1.Image = TrainedFace;

                //Write the number of trained faces to a text file for later loading
                File.WriteAllText(Application.StartupPath + "/TrainedFaces/TrainedLabels.txt", trainingImages.ToArray().Length.ToString() + "%");

                //Write the labels of the trained faces to a text file for later loading
                for (int i = 1; i < trainingImages.ToArray().Length + 1; i++)
                {
                    trainingImages.ToArray()[i - 1].Save(Application.StartupPath + "/TrainedFaces/face" + i + ".bmp");
                    File.AppendAllText(Application.StartupPath + "/TrainedFaces/TrainedLabels.txt", labels.ToArray()[i - 1] + "%");
                }

                MessageBox.Show(textBox1.Text + "'s face detected and added.", "Training OK", MessageBoxButtons.OK, MessageBoxIcon.Information);
            }
            catch
            {
                MessageBox.Show("Enable the face detection first.", "Training Fail", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
            }
        }


        void FrameGrabber(object sender, EventArgs e)
        {
            label3.Text = "0";
            //label4.Text = "";
            NamePersons.Add("");

            //This is where the app most often encounters critical errors
            //Get the current frame from the capture device
            currentFrame = grabber.QueryFrame().Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);

            //Convert it to grayscale
            gray = currentFrame.Convert<Gray, Byte>();

            //Face detector
            MCvAvgComp[][] facesDetected = gray.DetectHaarCascade(
                face,
                1.2,
                10,
                Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                new Size(20, 20));

            //Action for each face detected
            foreach (MCvAvgComp f in facesDetected[0])
            {
                t = t + 1;
                result = currentFrame.Copy(f.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
                //Draw a red rectangle around the detected face
                currentFrame.Draw(f.rect, new Bgr(Color.Red), 2);

                if (trainingImages.ToArray().Length != 0)
                {
                    //TermCriteria for face recognition; the number of trained images acts as maxIteration
                    MCvTermCriteria termCrit = new MCvTermCriteria(ContTrain, 0.001);

                    //Eigenface recognizer
                    EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
                        trainingImages.ToArray(),
                        labels.ToArray(),
                        3000,
                        ref termCrit);

                    name = recognizer.Recognize(result);

                    //Draw the label for each face detected and recognized
                    currentFrame.Draw(name, ref font, new Point(f.rect.X - 2, f.rect.Y - 2), new Bgr(Color.LightGreen));
                }

                NamePersons[t - 1] = name;
                NamePersons.Add("");

                //Set the number of faces detected on the scene
                label3.Text = facesDetected[0].Length.ToString();

                /*
                //Set the region of interest on the faces
                gray.ROI = f.rect;
                MCvAvgComp[][] eyesDetected = gray.DetectHaarCascade(
                    eye,
                    1.1,
                    10,
                    Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
                    new Size(20, 20));
                gray.ROI = Rectangle.Empty;

                foreach (MCvAvgComp ey in eyesDetected[0])
                {
                    Rectangle eyeRect = ey.rect;
                    eyeRect.Offset(f.rect.X, f.rect.Y);
                    currentFrame.Draw(eyeRect, new Bgr(Color.Blue), 2);
                }
                */
            }
            t = 0;

            //Concatenate the names of the persons recognized
            for (int nnn = 0; nnn < facesDetected[0].Length; nnn++)
            {
                names = names + NamePersons[nnn] + ", ";
            }
            //Show the frame with the processed and recognized faces
            imageBoxFrameGrabber.Image = currentFrame;
            label4.Text = names;
            //This logs the faces recognized
            if (String.IsNullOrEmpty(names))
            {
                textBox2.Text = "off";
                string m = comboBox1.Text.ToString();
                string s = textBox2.Text.ToString();
                sErial(m, s);
            }
            else
            {
                File.AppendAllText(Application.StartupPath + "/RecognitionLog/facelog.txt",
                         names + DateTime.Now.ToString() + Environment.NewLine);
                textBox2.Text = "on";
                string m = comboBox1.Text.ToString();
                string s = textBox2.Text.ToString();
                sErial(m, s);
            }

            //This sends the name to EZ-Builder automatically - this needs to be enabled or disabled
            tbX.Text = "\"" + names + "\"";
            btnSetX.PerformClick(); //sends the information to EZ-Builder
            //btngetX.PerformClick(); //gets the information from EZ-Builder - this works only until recognition stops

            //This clears the name value for the next face to be recognized
            names = "";
            //Clear the list(vector) of names
            NamePersons.Clear();
        }
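
        //A sketch of a defensive helper (assumption, not part of the original app):
        //grabber.QueryFrame() can return null when the capture device is lost, which
        //is one source of the critical errors noted in FrameGrabber. Checking for
        //null before resizing lets the Idle handler skip a frame instead of throwing.
        //The name TryQueryFrame is hypothetical.
        private Image<Bgr, Byte> TryQueryFrame()
        {
            Image<Bgr, Byte> frame = grabber.QueryFrame();
            if (frame == null)
                return null;    //caller should skip this tick
            return frame.Resize(320, 240, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
        }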

        private void FrmPrincipal_Load(object sender, EventArgs e)
        {
            this.Text = "EZ-Face " + revision;
            //cbCamIndex.Items.AddRange(Camera.GetVideoCaptureDevices());
            tbLog.Visible = false;
            groupBox1.Enabled = false;
            loadUserSettingsToolStripMenuItem.PerformClick();
            timer1.Start();
            timer2.Start();
            if (File.Exists(Application.StartupPath + "/RecognitionLog/facelog.txt"))
            {
                var fileName = (Application.StartupPath + "/RecognitionLog/facelog.txt");
                FileInfo fi = new FileInfo(fileName);
                var size = fi.Length;
                lb_facename_file.Text = "Face Log File size:          " + size;
            }
            if (CamAuto == true)
            {
                lb_autorun.Text = "Enabled";
                button1.PerformClick();
            }
            else
            {
                lb_autorun.Text = "Disabled";
            }
            if (ConnectAuto == true)
            {
                lb_autoconnect.Text = "Enabled";
                btnConnect.PerformClick();
            }
            else
            {
                lb_autoconnect.Text = "Disabled";
            }
        }

        private void Log(object txt, params object[] vals)
        {
            tbLog.AppendText(string.Format(txt.ToString(), vals));
            tbLog.AppendText(Environment.NewLine);
        }

        private void btnConnect_Click(object sender, EventArgs e)
        {
            tbLog.Visible = true;
            try
            {
                if (_tcpClient != null)
                    disconnect();
                else
                    connect();
            }
            catch (Exception ex)
            {
                Log("Error performing connection action: {0}", ex);
            }
        }

        private void disconnect()
        {
            if (_tcpClient != null)
                _tcpClient.Close();
            _tcpClient = null;
            btnConnect.Text = "Connect";
            Log("Disconnected");
            tbLog.Visible = false;
        }

        private void connect()
        {
            int port = Convert.ToInt32(tbPort.Text);
            Log("Attempting Connection to {0}:{1}", tbAddress.Text, port);
            _tcpClient = new TcpClient();
            IAsyncResult ar = _tcpClient.BeginConnect(tbAddress.Text, port, null, null);
            System.Threading.WaitHandle wh = ar.AsyncWaitHandle;

            try
            {
                if (!ar.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(3), false))
                {
                    _tcpClient.Close();
                    throw new TimeoutException();
                }

                _tcpClient.EndConnect(ar);
            }
            finally
            {
                wh.Close();
            }

            _tcpClient.NoDelay = true;
            _tcpClient.ReceiveTimeout = 2000;
            _tcpClient.SendTimeout = 2000;
            btnConnect.Text = "Disconnect";
            Log("Connected");
            Log(readResponseLine());
        }

        private string sendCommand(string cmd)
        {
            try
            {
                Log("Sending: {0}", cmd);
                clearInputBuffer();
                _tcpClient.Client.Send(System.Text.Encoding.ASCII.GetBytes(cmd + Environment.NewLine));
                return readResponseLine();  //the original example used: Log(readResponseLine());
            }
            catch (Exception ex)
            {
                Log("Command Error: {0}", ex);
                disconnect();
            }
            return string.Empty;
        }

        /// <summary>
        /// Clears any data in the tcp incoming buffer by reading the buffer into an empty byte array.
        /// </summary>
        private void clearInputBuffer()
        {
            if (_tcpClient.Available > 0)
                _tcpClient.GetStream().Read(new byte[_tcpClient.Available], 0, _tcpClient.Available);
        }

        /// <summary>
        /// Blocks and waits for a string of data to be sent. The string is terminated with a \r\n
        /// </summary>
        private string readResponseLine()
        {
            string str = string.Empty;
            do
            {
                byte[] tmpBuffer = new byte[1024];
                //Read() may return fewer bytes than the buffer length; decode only what was actually read
                int bytesRead = _tcpClient.GetStream().Read(tmpBuffer, 0, tmpBuffer.Length);
                str += System.Text.Encoding.ASCII.GetString(tmpBuffer, 0, bytesRead);
            } while (!str.Contains(Environment.NewLine));

            // Return only the first line if multiple lines were received
            return str.Substring(0, str.IndexOf(Environment.NewLine));
        }
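
        //A sketch of an alternative (assumption, not part of the original app):
        //keep one StreamReader for the connection's lifetime and let ReadLine()
        //handle partial reads and the "\r\n" terminator. The _reader field and the
        //readResponseLineViaReader name are hypothetical; _reader would need to be
        //reset to null in disconnect().
        private StreamReader _reader;
        private string readResponseLineViaReader()
        {
            if (_reader == null)
                _reader = new StreamReader(_tcpClient.GetStream(), System.Text.Encoding.ASCII);
            return _reader.ReadLine();
        }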


        private void btnSetX_Click(object sender, EventArgs e)
        {
            sendCommand(string.Format("$FaceName = {0}", tbX.Text));
        }
        void sErial(string Port_name, string data_Send)
        {
            //Open the port, write the data, and close the port again on every call
            using (SerialPort sp = new SerialPort(Port_name, 9600, Parity.None, 8, StopBits.One))
            {
                sp.Open();
                sp.Write(data_Send);
            }
        }
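
        //A sketch of an alternative (assumption, not part of the original app):
        //reuse one open SerialPort instead of opening and closing the port on every
        //frame, which is slow and can throw if the port is busy. The _cachedPort
        //field and the sErialCached name are hypothetical.
        private SerialPort _cachedPort;
        void sErialCached(string Port_name, string data_Send)
        {
            if (_cachedPort == null || _cachedPort.PortName != Port_name || !_cachedPort.IsOpen)
            {
                if (_cachedPort != null)
                    _cachedPort.Close();    //PortName can only change while the port is closed
                _cachedPort = new SerialPort(Port_name, 9600, Parity.None, 8, StopBits.One);
                _cachedPort.Open();
            }
            _cachedPort.Write(data_Send);
        }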
        private void btn_stop_capture_Click(object sender, EventArgs e)
        {
            if (CapturingProcess == true)
            {
                Application.Idle -= FrameGrabber;
                grabber.Dispose();
                //Application.Exit();   //this closes the application
                //CapturingProcess = false;
                //playorpause.Text = "Play";
            }
            else
            {
                Application.Idle += FrameGrabber;
                //CapturingProcess = true;
                //playorpause.Text = "Pause";
            }
            button1.Enabled = true;
            btn_stop_capture.Enabled = false;
            groupBox1.Enabled = false;
            imageBoxFrameGrabber.Image = null;  //sets the image to blank
            imageBox1.Image = null; //sets image to blank
        }



        private void deleteLearnedFacesToolStripMenuItem_Click(object sender, EventArgs e)
        {
            DialogResult d = MessageBox.Show("Are you sure you want to delete all learned faces?", "Question", MessageBoxButtons.YesNo, MessageBoxIcon.Question);

            if (d == DialogResult.Yes)
            {
                if (CapRunning)
                {
                    btn_stop_capture.PerformClick();
                }
                Array.ForEach(Directory.GetFiles(Application.StartupPath + "/TrainedFaces"), File.Delete);
                button1.Enabled = false;
                DialogResult b = MessageBox.Show("You must close EZ-Face and re-open it for changes to take effect.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
                if (b == DialogResult.OK)
                {
                    Application.Exit();
                }
            }
            //If the user clicks No, do nothing
        }

        private void aboutToolStripMenuItem1_Click(object sender, EventArgs e)
        {
            var aboutForm = new About();
            aboutForm.revision = revision;
            aboutForm.Show();
        }

        private void j2RScientificComToolStripMenuItem_Click(object sender, EventArgs e)
        {
            System.Diagnostics.Process.Start("http://www.J2RScientific.com");
        }

        private void instructionsToolStripMenuItem_Click(object sender, EventArgs e)
        {
            if (File.Exists(@"C:\BotBrain\EZ-Face\Resources\ReadMe.txt"))
            {
                System.Diagnostics.Process.Start(@"C:\BotBrain\EZ-Face\Resources\ReadMe.txt");
            }
            else
            {
                MessageBox.Show("I'm sorry. The ReadMe.txt file could not be found.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
            }
           
        }

        private void deleteLogOfDetectedFacesToolStripMenuItem_Click(object sender, EventArgs e)
        {
            DialogResult f = MessageBox.Show("Are you sure you want to delete facelog.txt file?", "Question", MessageBoxButtons.YesNo, MessageBoxIcon.Question);
            if (f == DialogResult.Yes)
            {
                if (File.Exists(Application.StartupPath + "/RecognitionLog/facelog.txt"))
                {
                    File.Delete(Application.StartupPath + "/RecognitionLog/facelog.txt");
                }
            }
           
        }

        private void viewSavedFacesToolStripMenuItem_Click(object sender, EventArgs e)
        {
            Process.Start(Application.StartupPath + "/TrainedFaces");
        }

        private void viewLogOfDetectedFacesToolStripMenuItem_Click(object sender, EventArgs e)
        {
            if (File.Exists(Application.StartupPath + "/RecognitionLog/facelog.txt"))
            {
                System.Diagnostics.Process.Start(Application.StartupPath + "/RecognitionLog/facelog.txt");
            }
            else
            {
                MessageBox.Show("I'm sorry. The facelog.txt file could not be found.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
            }
        }

        private void saveSettingsToolStripMenuItem_Click(object sender, EventArgs e)
        {
            string httpAddress = tbAddress.Text;
            string portAddress = tbPort.Text;
            if (File.Exists(Application.StartupPath + "/Settings/user_settings.txt"))
            {
                File.Delete(Application.StartupPath + "/Settings/user_settings.txt");
            }
            File.AppendAllText(Application.StartupPath + "/Settings/user_settings.txt", httpAddress.ToString() + Environment.NewLine);
            File.AppendAllText(Application.StartupPath + "/Settings/user_settings.txt", portAddress.ToString() + Environment.NewLine);
            File.AppendAllText(Application.StartupPath + "/Settings/user_settings.txt", _CameraIndex.ToString() + Environment.NewLine);
            File.AppendAllText(Application.StartupPath + "/Settings/user_settings.txt", CamAuto.ToString() + Environment.NewLine);
            File.AppendAllText(Application.StartupPath + "/Settings/user_settings.txt", ConnectAuto.ToString() + Environment.NewLine);
            MessageBox.Show("Your user settings were saved.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
        }

        private void loadUserSettingsToolStripMenuItem_Click(object sender, EventArgs e)
        {
            if (File.Exists(Application.StartupPath + "/Settings/user_settings.txt"))
            {
                string[] lines = System.IO.File.ReadAllLines(Application.StartupPath + "/Settings/user_settings.txt");
                //Each settings line has a fixed position in the file, so no loop is needed
                tbAddress.Text = lines[0];
                tbPort.Text = lines[1];
                _CameraIndex = Int32.Parse(lines[2]);
                CamAuto = bool.Parse(lines[3]);
                ConnectAuto = bool.Parse(lines[4]);
            }
            else
            {
                MessageBox.Show("I'm sorry. The user_settings.txt file could not be found.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
            }
        }

        private void timer1_Tick(object sender, EventArgs e)
        {
            if (File.Exists(Application.StartupPath + "/RecognitionLog/facelog.txt"))
            {
                var fileName = (Application.StartupPath + "/RecognitionLog/facelog.txt");
                FileInfo fi = new FileInfo(fileName);
                var size = fi.Length;
                lb_facename_file.Text = "Face Log File size:          " + size;

                if (size > 1000000) //if the file is greater than 1 MB it will be deleted
                {
                    File.Delete(Application.StartupPath + "/RecognitionLog/facelog.txt");
                }
            }


        }

        private void cbCamIndex_SelectedIndexChanged(object sender, EventArgs e)
        {
            //-> Get the selected item in the combobox
            KeyValuePair<int, string> SelectedItem = (KeyValuePair<int, string>)cbCamIndex.SelectedItem;
            //-> Assign the selected camera index to the defined variable
            _CameraIndex = SelectedItem.Key;
        }

        private void btn_refresh_camerlist_Click(object sender, EventArgs e)
        {
            //-> Create a List to store for ComboCameras
            List<KeyValuePair<int, string>> ListCamerasData = new List<KeyValuePair<int, string>>();
            //-> Find systems cameras with DirectShow.Net dll
            DsDevice[] _SystemCameras = DsDevice.GetDevicesOfCat(FilterCategory.VideoInputDevice);
            int _DeviceIndex = 0;
            foreach (DirectShowLib.DsDevice _Camera in _SystemCameras)
            {
                ListCamerasData.Add(new KeyValuePair<int, string>(_DeviceIndex, _Camera.Name));
                _DeviceIndex++;
            }
            //-> Clear the combobox
            cbCamIndex.DataSource = null;
            cbCamIndex.Items.Clear();
            //-> Bind the combobox
            cbCamIndex.DataSource = new BindingSource(ListCamerasData, null);
            cbCamIndex.DisplayMember = "Value";
            cbCamIndex.ValueMember = "Key";
            //DirectShowLib-2005 must be added as a reference in the bin folder
        }

        private void setCameraToAutoRunToolStripMenuItem_Click(object sender, EventArgs e)
        {
            CamAuto = true;
            lb_autorun.Text = "Enabled";
            MessageBox.Show("Remember, please make sure all user settings are set to the correct values - then use the File/Save User Settings feature.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
        }

        private void removeCameraAutoRunToolStripMenuItem_Click(object sender, EventArgs e)
        {
            CamAuto = false;
            lb_autorun.Text = "Disabled";
            MessageBox.Show("Remember, please make sure all user settings are set to the correct values - then use the File/Save User Settings feature.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
        }

        private void btngetX_Click(object sender, EventArgs e)
        {
            string retVal = sendCommand("print($EZfaceCMD)");
            tb_getX.Text = retVal;
            Log(retVal);
            if (retVal == "EZfaceSTOP")
            {
                if (button1.Enabled == false)
                {
                btn_stop_capture.PerformClick();
                }
            }
            if (retVal == "EZfaceSTART")
            {
                if (btn_stop_capture.Enabled == false)
                {
                    button1.PerformClick();
                }
            }
            if (retVal == "EZfaceCLOSE")
            {
                Application.Exit();
            }
        }

        private void setAutoConnectToolStripMenuItem_Click(object sender, EventArgs e)
        {
            ConnectAuto = true;
            lb_autoconnect.Text = "Enabled";
            MessageBox.Show("This setting enables the auto communication connection when the application starts.  Remember, please make sure all user settings are set to the correct values - then use the File/Save User Settings feature.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
        }

        private void removeAutoConnectToolStripMenuItem_Click(object sender, EventArgs e)
        {
            ConnectAuto = false;
            lb_autoconnect.Text = "Disabled";
            MessageBox.Show("This setting disables the auto communication connection when the application starts.  Remember, please make sure all user settings are set to the correct values - then use the File/Save User Settings feature.", "EZ-Face Notice", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);

        }

        private void timer2_Tick(object sender, EventArgs e)
        {
            btngetX.PerformClick();
        }

        private void tbPort_TextChanged(object sender, EventArgs e)
        {

        }

        private void tbX_TextChanged(object sender, EventArgs e)
        {
           
        }
       
        private void button3_Click(object sender, EventArgs e)
        {
            //Refresh the serial port list (clear first so repeated clicks do not add duplicates)
            comboBox1.Items.Clear();
            string[] ports = SerialPort.GetPortNames();
            foreach (string port in ports)
            {
                comboBox1.Items.Add(port);
            }
        }

        private void label3_Click(object sender, EventArgs e)
        {

        }
     
   }
}