Lazarus

Programming => General => Topic started by: cpalx on November 02, 2012, 01:32:58 pm

Title: artificial neural networks | back propagation
Post by: cpalx on November 02, 2012, 01:32:58 pm
Is there any place where I can download the source code of a back propagation artificial neural network? (Delphi or Lazarus)
Title: Re: artificial neural networks | back propagation
Post by: Leledumbo on November 02, 2012, 05:30:07 pm
None that I know of (I implemented a part of it when I took the class a few years back, but I lost the code), however there's a Delphi binding for FANN (http://leenissen.dk/fann/html/files2/installation-txt.html).
Title: Re: artificial neural networks | back propagation
Post by: schuler on August 09, 2017, 11:13:26 pm
 :) Hello  :)
Just to let you know that I've just implemented a backpropagation algorithm in Lazarus:

https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ubackpropagation.pas

 :) Have fun :)
Title: Re: artificial neural networks | back propagation
Post by: avra on August 10, 2017, 12:22:14 am
Try here: http://forum.lazarus.freepascal.org/index.php/topic,32620.msg210473.html#msg210473
Title: Re: artificial neural networks | back propagation
Post by: Thaddy on August 10, 2017, 10:29:24 am
Try here: http://forum.lazarus.freepascal.org/index.php/topic,32620.msg210473.html#msg210473
Nope, you missed the point. That is actually very nice and concise code... Short is better than long. Playing with it..
Title: Re: artificial neural networks | back propagation
Post by: avra on August 10, 2017, 09:06:49 pm
Try here: http://forum.lazarus.freepascal.org/index.php/topic,32620.msg210473.html#msg210473
Nope, you missed the point
Would you be so kind as to explain why? I don't get it. What's wrong with the simple back propagation Pascal code that you get after following the first link:
http://wayback.archive.org/web/20100926121015/http://richardbowles.tripod.com:80/neural/source/simple1.htm
Title: Re: artificial neural networks | back propagation
Post by: schuler on August 10, 2017, 09:52:50 pm
 :) HELLO  :)

Just added a new constructor. It allows you to create a network of any size.

Code: Pascal
var
  B: TBackPropagation;
const
  aNetworkSize: array[0..3] of integer = (3,1000,1000,3);
begin
  B := TBackPropagation.Create(aNetworkSize);

The above means that you are creating a network with:
1 input layer with 3 nodes.
2 hidden layers with 1000 nodes each.
1 output layer with 3 nodes.

Just found that using "0.9" as the target output was saturating weights on networks with more than 10 layers; using "0.1" seems to work well on these networks. The reason is the derivative function, which forces very large weights when the output gets too close to 1 or -1.
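As a rough sketch of why this happens, assuming the output neurons use a hyperbolic tangent activation (my assumption, not something stated in the unit): its derivative is

\[ \tanh'(x) = 1 - \tanh^2(x), \]

which approaches 0 as the output approaches 1 or -1, so targets near the extremes can only be reached with very large pre-activations and weights, while targets of ±0.1 keep the units in the well-behaved region.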

Code: Pascal
const outputs : TBackInputOutput =
  (// XOR, AND,   OR
    (-0.1,-0.1,-0.1),
    ( 0.1,-0.1, 0.1),
    ( 0.1,-0.1, 0.1),
    (-0.1, 0.1, 0.1)
  );

My next step: I'll create a class specific to data classification, extending the backpropagation class.

 :) Have Fun :)
Title: Re: artificial neural networks | back propagation
Post by: Leledumbo on August 12, 2017, 01:01:54 am
:) Hello  :)
Just to let you know that I've just implemented a backpropagation algorithm in Lazarus:

https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ubackpropagation.pas

 :) Have fun :)
Ah... if only you had made it 7 years earlier, I might have gotten an A in that class :P
Title: Re: artificial neural networks | back propagation
Post by: schuler on September 06, 2017, 06:10:15 am
:) Hello :)

It's interesting how far you can push backpropagation in a convolutional neural network.

I've just finished coding a convolutional neural network in plain object pascal. It's now time for a long testing phase.

In case you have never heard of convolutional neural networks, there is a good example here:
http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html

This is the example from ConvNetJS:
Code: Javascript
layer_defs = [];
layer_defs.push({type:'input', out_sx:32, out_sy:32, out_depth:3});
layer_defs.push({type:'conv', sx:5, filters:16, stride:1, pad:2, activation:'relu'});
layer_defs.push({type:'pool', sx:2, stride:2});
layer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});
layer_defs.push({type:'pool', sx:2, stride:2});
layer_defs.push({type:'conv', sx:5, filters:20, stride:1, pad:2, activation:'relu'});
layer_defs.push({type:'pool', sx:2, stride:2});
layer_defs.push({type:'softmax', num_classes:10});

This is a similar implementation in Pascal using the brand new code:
Code: Pascal
NumClasses := 10;
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(32,32,3) );
NN.AddLayer( TNNetConvolutionRelu.Create(16,5,2,0) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolutionRelu.Create(20,5,2,0) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetConvolutionRelu.Create(20,5,2,0) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetLayerFullConnect.Create(NumClasses) );
NN.Randomize();

The Pascal source code can be found here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/uconvolutionneuralnetwork.pas

 :) I wish everyone happy coding  :)
Title: Re: artificial neural networks | back propagation
Post by: elxanders on September 08, 2017, 05:07:37 am
Hello. I was going to experiment with neural networks, did not want to write my own, and found your library (ubackpropagation). I haven't tried it yet, but so far a library with no saving/loading routines does not seem very promising for real work.
P.S. Your convolutional module still has a notice that it is unfinished and should not be used.
Title: Re: artificial neural networks | back propagation
Post by: schuler on September 12, 2017, 10:17:16 am
elxanders, sorry for taking so long to reply.

The current state: I'm coding a self-adapting learning rate. When the hyperbolic tangent and ReLU approaches :o are both stable and users don't need to spend a weekend tuning the learning rate, it will be time for load/store methods. This might be ready in a month or two.

 :) happy coding to all pascal lovers :)
Title: Re: artificial neural networks | back propagation
Post by: schuler on September 29, 2017, 10:19:39 pm
:) Hello Pascal Lovers :)

@elxanders

The hyperbolic tangent and ReLU approaches both seem to be numerically stable (no overflow or underflow) as long as input values are transformed as in the example and initialization calls the same Randomize function used in testing.

Numerical stability seems to hold for plain Pascal code, 32-bit AVX and 64-bit AVX (see the uvolume unit).

Also, there is no longer any need to spend hours setting a proper learning rate. The default behavior seems to work well on all 6 testing algorithms. The testing file is located here:

https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/testcnnalgo/testcnnalgo.lpr

All tests so far have been done with CIFAR10 classification.

Although more 48-hour testing runs are required, the first 16-hour runs look promising.

So it seems that I'll soon be coding load/save functions.

:) I wish happy coding to all pascal lovers :)
Title: Re: artificial neural networks | back propagation
Post by: schuler on November 04, 2017, 02:19:28 am
@elxanders

Just finished coding load/save of the neural network. You can use these methods:
Code: Pascal
function TNNet.SaveToString(): string;
procedure TNNet.SaveToFile(filename: string);

procedure TNNet.LoadFromString(strData: string);
procedure TNNet.LoadFromFile(filename: string);

You can also load/save data and/or structure only:

Code: Pascal
function TNNet.SaveDataToString(): string;
procedure TNNet.LoadDataFromString(strData: string);

function TNNet.SaveStructureToString(): string;
procedure TNNet.LoadStructureFromString(strData: string);
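For instance, a hedged usage sketch combining these calls (only the method names come from the post above; the variables and the idea of rebuilding a second network are mine):

Code: Pascal
var
  structStr, dataStr: string;
  NN2: TNNet;
begin
  // Persist architecture and weights separately...
  structStr := NN.SaveStructureToString();
  dataStr   := NN.SaveDataToString();

  // ...and rebuild an identical network somewhere else.
  NN2 := TNNet.Create();
  NN2.LoadStructureFromString(structStr);
  NN2.LoadDataFromString(dataStr);
end;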

A general example about how to use TNNet class can be found here:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/testcnnalgo/testcnnalgo.lpr
Title: Re: artificial neural networks | back propagation
Post by: elxanders on November 05, 2017, 01:00:49 am
Thank you. That was nice to hear. So, it is time to play with it.

A short example of non-convolutional network training would be great, like the one you had in the backpropagation module.

After you create a network:

Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(InputSize) );
NN.AddLayer( TNNetLayerFullConnectReLU.Create(LayerSize) );
NN.AddLayer( TNNetLayerFullConnectReLU.Create(LayerSize) );
...
NN.Randomize();
NN.SetLearningRate(0.08,0.9);

What are the further steps to organize the training process?
I can see the input requires a TNNetVolume. Will it be fast enough to create and pass objects each time instead of a simple dataset? What about the output and managing it?

Will this TNNet be useful for tuning numeric output, or for classification only?

Does it support batches (and accumulate errors), or does Backpropagate() need to be called after each Compute()? Can SetLearningRate be changed later during the training process without any problems in the structure?

Sorry for the lazy questions, but exploring somebody else's source code takes a lot of effort :)
Title: Re: artificial neural networks | back propagation
Post by: schuler on November 05, 2017, 01:32:03 am
Hello elxanders,
Here we go with a short example. In this example, the NN learns 3 logic operations XOR, AND and OR.

Code: Pascal
type TBackInputOutput = array[0..3] of array[0..2] of TNeuralFloat;

const outputs : TBackInputOutput =
  (// XOR, AND,   OR
    (-0.1,-0.1,-0.1),
    ( 0.1,-0.1, 0.1),
    ( 0.1,-0.1, 0.1),
    (-0.1, 0.1, 0.1)
  );

const inputs : TBackInputOutput =
  ( // x1,   x2, bias
    (-0.9, -0.9, 1),
    (-0.9,  0.9, 1),
    ( 0.9, -0.9, 1),
    ( 0.9,  0.9, 1)
  );

procedure TForm1.BitBtn6Click(Sender: TObject);
var
  NN: TNNet;
  I: integer;
  Cnt: integer;
  pOutPut: TNNetVolume;
  vInputs, vOutput: TBackInputOutput;
  totalTimeSeconds, startTime, finishTime: double;
  shouldPrint: boolean;
begin
  shouldPrint := true;

  vInputs := inputs;
  vOutput := outputs;

  pOutPut := TNNetVolume.Create(3,1,1,1);
  NN := TNNet.Create();

  NN.AddLayer( TNNetInput.Create(3) );
  NN.AddLayer( TNNetLayerFullConnect.Create(3) );
  NN.AddLayer( TNNetLayerFullConnect.Create(3) );
  NN.AddLayer( TNNetLayerFullConnect.Create(3) );

  NN.Randomize();
  startTime := now();
  for I := 1 to 3000 do
  begin
    if shouldPrint then WriteLn();
    for Cnt := Low(inputs) to High(inputs) do
    begin
      NN.Compute(vInputs[cnt]);
      NN.GetOutput(pOutPut);
      NN.Backpropagate(vOutput[cnt]);
      if shouldPrint then
      WriteLn
      (
        I:7,'x',Cnt,' Output:',
        pOutPut.Raw[0]:5:2,' ',
        pOutPut.Raw[1]:5:2,' ',
        pOutPut.Raw[2]:5:2,' - Training data:',
        vOutput[cnt][0]:5:2,' ',
        vOutput[cnt][1]:5:2,' ' ,
        vOutput[cnt][2]:5:2,' '
      );
    end;
  end;
  finishTime := now();
  totalTimeSeconds := (finishTime - startTime) * 24 * 60 * 60;
  writeln('Total run time:', (totalTimeSeconds): 10: 5, ' seconds.');

  NN.Free;
  pOutPut.Free;
end;

You have 2 questions that affect performance:
* About creating and passing objects: for big datasets, I create all required objects before running the NN. There is an example here: testcnnalgo.lpr.
For simplicity, you can also use dynamic arrays of Single elements.
* About batches: although I don't have batches, and they could make the NN faster, some of my own benchmarks against some well-known APIs (I won't give names for now) show that this implementation is up to 10x faster with CPU only (I mean, CPU implementation against CPU implementation). The OpenCL version is in the cards. Yes - you need to call Backpropagate each time.

The latest version of testcnnalgo.lpr shows an example with a decreasing learning rate. Yes, you can change the learning rate and the inertia at any time. I highly recommend running the testcnnalgo.lpr program from the command line to get a feel for it.
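For illustration, a minimal sketch of such a decreasing learning rate, built only on the SetLearningRate call shown above; the decay factor, epoch counter and RunOneEpoch placeholder are mine, not taken from testcnnalgo.lpr:

Code: Pascal
LearningRate := 0.08;
for Epoch := 1 to MaxEpochs do
begin
  NN.SetLearningRate(LearningRate, 0.9); // second parameter is the inertia
  RunOneEpoch(NN);                       // placeholder for a full pass over the training data
  LearningRate := LearningRate * 0.99;   // hypothetical decay factor
end;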

There is a new implementation here:
https://github.com/joaopauloschuler/neural-api/tree/master/examples/XorAndOr

Feel free to share questions as they might help other users.

Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 07, 2018, 12:40:25 pm
Impressive project and implementation @schuler. Your work is gold. :)

I wonder why you declared TVolume as a generic, though? From what I've seen it isn't really necessary, is it? In the whole project you implement TVolume only once, in uvolume.pas, as TNNetVolume = class (specialize TVolume<TNeuralFloat>), and TNeuralFloat is of type Single.
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 11, 2018, 05:30:45 am
 :) Hello mw108 :)

Thank you for the "impressive". I've been adding new features for a year now and it seems it's going to continue like this.

At the moment, the AVX implementation only supports the Single type, and the OpenCL dot-product engine currently in development only supports Single too (I just tested it on Ubuntu 18.04 with an NVIDIA K80 and the performance was a disaster - I hope due to a bug somewhere). Anyway, for now, only the Single type is supported.

When I started coding it, I asked myself whether I would ever use another floating-point type. Therefore, for now, there is a generic implementation plus AVX/AVX2 implementations for the Single type only. BTW, in case you would like to give it a go, add these defines to your build: Release, AVX64 and AVX2. Your CPU will fly like a GPU. I've been testing on cloud servers with up to 64 vCPUs with AVX, and I must say it flies.
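If it helps, a hedged sketch of one way to pass those defines; the names come from the post above, but where exactly they belong in a CAI build is my assumption:

Code: Pascal
// Lazarus: Project Options -> Compiler Options -> Custom Options:
//   -dRelease -dAVX64 -dAVX2
// Command line build of the test program, for example:
//   fpc -dRelease -dAVX64 -dAVX2 testcnnalgo.lpr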

I haven't formally published results yet. But I intend to do it in the next 12 months.

Let me know if you need any help.

:) Wish everyone happy pascal coding. :)
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 11, 2018, 07:45:00 pm
Thanks for the reply and explanation.

I see. But if you want to implement another floating point type, like Double, you have to get rid of that:

Code: Pascal
TNeuralFloatArr = array[0..300000000] of TNeuralFloat;

What was the reason for implementing something like this? I see that you do some System.Move() operations with the Arrays. But can't you use a List type or something? This array is a real memory hog. :D

I actually started to experiment with a List type, but then got stuck at that point where you do the direct memory operations.

Code: Pascal
interface

  { TNeuralList }

  generic TNeuralList<T> = class(specialize TFPGList<T>)
  private
    procedure EnsureRange(Index: Integer);

    function Get(Index: Integer): T; {$ifdef FGLINLINE} inline; {$endif}
    procedure Put(Index: Integer; const Item: T); {$ifdef FGLINLINE} inline; {$endif}
  public
    // Copies <Range> amount of items from <Source> at <SourceIndex> to <Self> at <TargetIndex>
    procedure Copy(Source: specialize TNeuralList<T>; SourceIndex: Integer; TargetIndex: Integer; Range: Integer);

    property Items[Index: Integer]: T read Get write Put; default;
  end;

  TNeuralFloatList = class(specialize TNeuralList<TNeuralFloat>);

  { TVolume }
  generic TVolume<T> = class(TObject)
    ...
  public
    // FData was made public to allow other fast operations
    FData: specialize TNeuralList<T>; //array of T;
    ...
  end;

implementation

{ TNeuralList }

// Grows the list with zero-filled items so that <Index> becomes a valid index.
procedure TNeuralList.EnsureRange(Index: Integer);
var
  i: Integer;
begin
  if (Index > Self.Count - 1) then
    begin
      for i := Self.Count to Index do
        Add(0.0);
    end;
end;

function TNeuralList.Get(Index: Integer): T;
begin
  EnsureRange(Index);
  Result := inherited;
end;

procedure TNeuralList.Put(Index: Integer; const Item: T);
begin
  EnsureRange(Index);
  inherited;
end;

procedure TNeuralList.Copy(Source: specialize TNeuralList<T>; SourceIndex: Integer; TargetIndex: Integer; Range: Integer);
var
  I: Integer;
begin
  EnsureRange(TargetIndex + Range);
  for I := 0 to Range - 1 do
    Self[TargetIndex + I] := Source[SourceIndex + I];
end;

The idea of that List is that it takes care of the boundaries itself, so that you can access it like an array:

Code: Pascal
var
  TestList: TNeuralFloatList;
  nf: TNeuralFloat;
begin
  TestList := TNeuralFloatList.Create;
  try
    nf := 1.2345;
    TestList[1000] := nf;
    nf := TestList[1000];
    WriteLn(nf, ' Count: ', TestList.Count); // 1.2345
  finally
    TestList.Free;
  end;
end;

My idea was to change the TNNetVolumeList.ConcatInto and TNNetVolumeList.SplitFrom functions into something like this:

Code: Pascal
procedure TNNetVolumeList.ConcatInto(V: TNNetVolume);
var
  TotalSize: integer;
  I: integer;
  CurrPos: integer;
begin
  if (Count>0) then
  begin
    TotalSize := Self.GetTotalSize();
    if V.Size < TotalSize then
    begin
      V.ReSize(TotalSize,1,1);
    end;

    CurrPos := 0;
    for I := 0 to Count - 1 do
    begin
      V.FData.Copy(Self[I].FData, 0, CurrPos, Self[I].Size);
      //system.Move(Self[I].FData[0], V.FData[CurrPos], Self[I].Size * SizeOf(TNeuralFloat));
      CurrPos += Self[I].Size;
    end;
  end;
end;

procedure TNNetVolumeList.SplitFrom(V: TNNetVolume);
var
  TotalSize: integer;
  I: integer;
  CurrPos: integer;
begin
  if (Count>0) then
  begin
    TotalSize := Self.GetTotalSize();
    if V.Size < TotalSize then
    begin
      V.ReSize(TotalSize,1,1);
    end;

    CurrPos := 0;
    for I := 0 to Count - 1 do
    begin
      Self[I].FData.Copy(V.FData, CurrPos, 0, Self[I].Size);
      //system.Move(V.FData[CurrPos], Self[I].FData[0], Self[I].Size * SizeOf(TNeuralFloat));
      CurrPos += Self[I].Size;
    end;
  end;
end;

But I'm not sure if Self.Size represents the number of records in the List, like List.Count.

What do you think?
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 11, 2018, 08:06:40 pm
Thanks for the reply and explanation.

Anyone interested in TensorFlow instead of rolling your own neural network?

https://macpgmr.github.io/MacXPlatform/PascalForTensorFlow.html

Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 11, 2018, 08:35:29 pm
But I'm not sure if Self.Size represents the number of records in the List, like List.Count.

Ok, I just saw that Size is the product of all dimensions of the Volume:

Code: Pascal
FSize := FSizeX * FSizeY * FDepth;
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 12, 2018, 12:33:32 am
Hello Phil.

Thank you for sharing the pascal for tensorflow link.

Quote
Anyone interested in TensorFlow instead of rolling your own neural network?

To give some perspective on where CAI is, have a look at how the TensorFlow CIFAR-10 example is implemented:

https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py

Have a look at how the TensorFlow NN architecture is defined in the inference(images) method.

In TensorFlow, you need a lot of source code lines to define a relatively small model. In CAI, this is how you define a similar model:

Code: Pascal
NN.AddLayer( TNNetInput.Create(32,32,iInputDepth) );
NN.AddLayer( TNNetConvolutionReLU.Create(64, 5, 2, 1) );
NN.AddLayer( TNNetMaxPool.Create(3) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create(11) );
NN.AddLayer( TNNetConvolutionReLU.Create(64, 5, 2, 1) );
NN.AddLayer( TNNetMaxPool.Create(2) );
NN.AddLayer( TNNetLocalResponseNormDepth.Create(11) );
NN.AddLayer( TNNetFullConnectReLU.Create(384) );
NN.AddLayer( TNNetFullConnectReLU.Create(192) );
NN.AddLayer( TNNetFullConnectReLU.Create(NumClasses) );
NN.AddLayer( TNNetSoftMax.Create() );

As you can see, using CAI it's ultra easy to define similar NNs. Plus, as most of the code is pure Pascal, if you intend to create a new layer type, you can!!!

Besides creating the NN itself, adding data augmentation is super easy:
Code: Pascal
// Random crop and resize
CropSizeX := 4 + random(5);
CropSizeY := 4 + random(5);

ImgInputCp.CopyCropping(ImgVolumes[ImgIdx], random(CropSizeX), random(CropSizeY),ImgVolumes[ImgIdx].SizeX-CropSizeX, ImgVolumes[ImgIdx].SizeY-CropSizeY);
ImgInput.CopyResizing(ImgInputCp, ImgVolumes[ImgIdx].SizeX, ImgVolumes[ImgIdx].SizeY);

// flip is always used in training
if Random(1000) > 500 then
begin
  ImgInput.FlipX();
end;

// one salt and one pepper for each 200 pixels
ImgInput.AddSaltAndPepper( (ImgInput.SizeX * ImgInput.SizeY) div 200 );

if (color_encoding = csEncodeRGB) and ( Random(1000) > 750 ) then
begin
  ImgInput.RgbToGray();
end;

// Random "channel add"
ImgInput.AddAtDepth(0, ( (Random(1024)-512)*FNoiseLevel) / 2560 );
if ImgInput.Depth >= 1 then ImgInput.AddAtDepth(1, ( (Random(1024)-512)*FNoiseLevel) / 2560 );
if ImgInput.Depth >= 2 then ImgInput.AddAtDepth(2, ( (Random(1024)-512)*FNoiseLevel) / 2560 );

Besides CIFAR-10 classification, there is another interesting experiment with CAI:
https://www.youtube.com/watch?v=jdFixaZ2P4w

Some results were posted here:
http://forum.lazarus.freepascal.org/index.php?topic=39305.0

I'm currently typing a paper for peer review with impressive results in regards to CIFAR-10 classification.

:) wish everyone happy pascal coding :)
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 12, 2018, 12:43:10 am
I'm currently typing a paper for peer review with impressive results in regards to CIFAR-10 classification.

It would be interesting if you would code the MNIST.swift example and show the code here. This is a practical example of the sort of thing that people are now doing routinely with software like Apple's Create ML (https://developer.apple.com/documentation/create_ml).

https://github.com/tensorflow/swift-models/tree/master/MNIST

I've included a Pascal version of MNIST.swift that uses the TensorFlow library.
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 12, 2018, 05:41:00 am
Hi Phil,
Thank you for sharing the links; they give me the opportunity to see an implementation for MNIST.

The equivalent NN architecture with CAI will be:
Code: Pascal
NN := TNNet.Create();
NN.AddLayer( TNNetInput.Create(28, 28, 1) ); // 28*28*1 = 784
NN.AddLayer( TNNetFullConnect.Create(30) );
NN.AddLayer( TNNetFullConnect.Create(10) );

The main training block (one epoch) will look like this:
Code: Pascal
for Cnt := Low(inputs) to High(inputs) do
begin
  NN.Compute(inputs[cnt]);
  NN.GetOutput(pOutPut);
  NN.Backpropagate(vOutput[cnt]);
end;

You can define learning rate and inertia with:
Code: Pascal
NN.SetLearningRate(0.001, 0.9);

Title: Re: artificial neural networks | back propagation
Post by: schuler on July 12, 2018, 05:54:56 am
@mw108
Quote
What was the reason for implementing something like this?

I needed this:
Code: Pascal
TNeuralFloatArr = array[0..300000000] of TNeuralFloat;

To be able to declare this:
Code: Pascal
TNeuralFloatArrPtr = ^TNeuralFloatArr;

With this type, I can then use:
Code: Pascal
TNNetVolume = class (specialize TVolume<TNeuralFloat>)
  private
    FDataPtr: TNeuralFloatArrPtr;

I can now call assembler code passing pointers as parameters:
Code: Pascal
AVXDotProduct(FDataPtr, Original.FDataPtr, FSize)

It's now simple to interface Volumes with OpenCL code. Have a look at this file:
https://sourceforge.net/p/cai/svncode/HEAD/tree/trunk/lazarus/libs/ueasyopencl.pas

Code: Pascal
function TEasyOpenCLV.WriteBuffer(buffer: cl_mem; V: TNNetVolume): integer;
begin
  Result := WriteBuffer(buffer, V.GetMemSize(), V.DataPtr);
end;
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 12, 2018, 07:29:48 am
@mw108
Quote
Ok, I just saw that Size is the product of all dimensions of the Volume:

YES!!! The volume is both a 3D and a 1D data structure, physically implemented as a dynamic array. In some APIs, you have to transform from 3D to 1D, such as when connecting the last convolutional layer to the first fully connected layer. This transformation isn't required here, as both representations live together.

You can access as 3D via:
Code: Pascal
property Data[x, y, d: integer]: T read Get write Store; default;

Or, you can access as 1D via:
Code: Pascal
property Raw[x: integer]: T read GetRaw write SetRaw;
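For illustration, a minimal usage sketch assuming a volume V: TNNetVolume that has already been created and sized; the exact 1D index matching a given (x, y, d) triple depends on the internal layout, so it is not assumed here:

Code: Pascal
// Write through the 3D view...
V.Data[0, 0, 0] := 1.0;
// ...and read the same buffer through the 1D view.
Sum := 0;
for i := 0 to V.Size - 1 do   // Size = SizeX * SizeY * Depth
  Sum := Sum + V.Raw[i];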
Title: Re: artificial neural networks | back propagation
Post by: SymbolicFrank on July 12, 2018, 12:08:02 pm
Hi schuler,

Very interesting! Some questions:

Most processes are multi-step (flowchart): they require multiple actions in sequence (probably all a neural network as well) to get a result. How do you calculate a score for the learning from that? Store all intermediate values? But how do you pinpoint the exact step that was weakest and should be tweaked most? Or would you need a kind of unit test for each step?

Is the learning always a separate pass to create a file with biases and weights, or can the network keep on learning as it goes? It would need some feedback for that, which is probably generated by a different process and so might have a different format, and might have to be processed itself before it becomes useful. How would you do that? Use another neural network to process the feedback? But that should have learning feedback as well. Etc.

In human vision, first we detect edges and then shapes. Those shapes are extrapolated and normalized (rotate, tilt, pan, resize, etc), and should then be handled by their own neural network for processing. Handed over to the right sub-process / step in the flowchart, so to say. How would you go about doing that?

Title: Re: artificial neural networks | back propagation
Post by: Phil on July 12, 2018, 02:09:40 pm
Hi Phil,
Thank you for sharing links so it gives me opportunity to see an implementation for MNIST.

I meant a working program functionally equivalent to MNIST.
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 12, 2018, 07:44:20 pm
I needed this:
Code: Pascal
TNeuralFloatArr = array[0..300000000] of TNeuralFloat;

To be able to declare this:
Code: Pascal
TNeuralFloatArrPtr = ^TNeuralFloatArr;

With this type, I can then use:
[...]

Yes, I saw that. But are you aware that this array takes up 1.2 GB of the 2 GB global memory limit? Why does it need to have 300 million elements? Why not 250 million, 100 million or 10 million? Is there any specific reason for that number?
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 12, 2018, 07:51:53 pm
Yes, I saw that. But are you aware that this array takes up 1.2 GB of the 2 GB global memory limit? Why does it need to have 300 million elements? Why not 250 million, 100 million or 10 million? Is there any specific reason for that number?

It may be that he's only using it as a way of referencing, as an array, a block of memory that is never bigger than that dimension.

Another approach might be to use dynamic arrays. That's what I did in MNIST.pas and in the TTensor class:

https://macpgmr.github.io/MacXPlatform/PascalForTensorFlow.html



Title: Re: artificial neural networks | back propagation
Post by: SymbolicFrank on July 12, 2018, 08:42:43 pm
This is how the list of a TStrings is defined:

Code: Pascal
TStringItemList = array[0..MaxListSize] of TStringItem;

MaxListSize is defined as:

Code: Pascal
MaxListSize = Maxint div 16;

That's how it works.
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 12, 2018, 09:35:04 pm
Hi everyone :) ,

Quote
But are you aware that this array takes up 1.2 GB of the 2 GB global memory limit?

I think that SymbolicFrank got the idea. This array is never actually allocated; I just need a pointer type. The big number is there just to avoid range check errors when debugging. BTW, I think I should replace the big number with a constant, as shown in SymbolicFrank's post.
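Something along these lines, for instance; csNeuralMaxElements is a hypothetical name, mirroring the MaxListSize pattern quoted above:

Code: Pascal
const
  csNeuralMaxElements = MaxInt div 16; // hypothetical upper bound, same idea as MaxListSize
type
  TNeuralFloatArr    = array[0..csNeuralMaxElements] of TNeuralFloat;
  TNeuralFloatArrPtr = ^TNeuralFloatArr;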

This is the heart of TVolume:
Code: Pascal
FData: array of T;

As you can see, it's not a static array. The pointer is always kept updated:
Code: Pascal
procedure TNNetVolume.ReSize(pSizeX, pSizeY, pDepth: integer);
begin
  inherited ReSize(pSizeX, pSizeY, pDepth);
  FDataPtr := addr(FData[0]);
end;

function TNNetVolume.GetMemSize(): integer;
begin
  Result := FSize * SizeOf(TNeuralFloat);
end;
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 12, 2018, 09:52:20 pm
@Phil
Quote
https://macpgmr.github.io/MacXPlatform/PascalForTensorFlow.html

Very interesting project. I might need your help to properly benchmark CAI against TF in the future.

I have a question for you: have you benchmarked PascalForTensorFlow against Python with TF? I would presume that Pas2TF is a lot faster than Python + TF.
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 12, 2018, 10:26:12 pm
@SymbolicFrank

Good questions. I'll reply based on my experience. It means that if you look at a computer science book, you might find other points of view.

Quote
Most processes are multi-step (flowchart): they require multiple actions in sequence (probably all a neural network as well) to get a result. How do you calculate a score for the learning from that?

In my opinion, the 2 main drivers in supervised learning are:

* What is sometimes called the error, distance or delta between the result and the expected result. In modern terms, this is the loss.
* The delta/error/distance/loss combined with the slope/derivative/gradient tells you where to apply more correction/learning/descent. For each step of the process, the derivative is calculated for each neuron.

CAI implements the stochastic gradient descent algorithm: it calculates an error at the last NN layer and then backpropagates through all layers, layer by layer. The backpropagation is done recursively.
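As a rough illustration only (the textbook update rule, not a copy of CAI's internal code), each weight moves against its gradient, scaled by the learning rate, optionally with the inertia term mentioned elsewhere in this thread:

Code: Pascal
// Plain SGD step for one weight; Delta is the error signal backpropagated to
// this neuron and Input is the activation feeding the weight.
Weight := Weight - LearningRate * Delta * Input;

// With inertia (momentum), a running speed term smooths the updates.
Speed  := Inertia * Speed - LearningRate * Delta * Input;
Weight := Weight + Speed;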

Quote
But how do you pinpoint the exact step that was weakest and should be tweaked most? Or would you need a kind of unit test for each step?
This is done via derivatives/slope/gradient and delta/error/distance/loss. If you have a big slope with a big delta on any given neuron/weight, there will be big learning/correction.

Quote
Is the learning always a separate pass to create a file with biases and weights, or can the network keep on learning as it goes? It would need some feedback for that, which is probably generated by a different process and so might have a different format, and might have to be processed itself before it becomes useful. How would you do that? Use another neural network to process the feedback? But that should have learning feedback as well

In supervised learning, you can keep training for as long as you want. Unfortunately, the learning ability of any computational device (biological or artificial) is always limited in space, energy and information (bits).

As an example, when you are happy with the quality of a "face detection" NN, you can freeze it, save it to a file, compile the code for a (resource-limited) Android device and run just the forward pass on your mobile device.
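A minimal sketch of that workflow, reusing the save/load and forward-pass calls already shown in this thread; the file name and input/output volumes are placeholders:

Code: Pascal
// On the training machine: persist the trained network.
NN.SaveToFile('facedetect.nn');

// On the target device: load it and run only the forward pass.
NN := TNNet.Create();
NN.LoadFromFile('facedetect.nn');
NN.Compute(vInput);      // no Backpropagate call, so no further learning
NN.GetOutput(pOutPut);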

To be continued...
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 12, 2018, 10:32:25 pm
As an example, when you are happy with the quality of a "face detection" NN, you can freeze your NN, save your NN to file, compile the code to an Android device (resource limited) and run just the forward pass in your mobile device.

Good example. Here's a fairly technical discussion of how Apple did the neural network for face detection:

https://machinelearning.apple.com/2017/11/16/face-detection.html

For Face ID, they use an in-processor neural engine:

https://www.pocket-lint.com/phones/news/apple/142207-what-is-apple-face-id-and-how-does-it-work
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 12, 2018, 10:37:47 pm
@Phil
Quote
https://macpgmr.github.io/MacXPlatform/PascalForTensorFlow.html

Very interesting project. I might need your help to properly benchmark CAI against TF in the future.

I have a question for you: have you benchmarked PascalForTensorFlow against Python with TF? I would presume that Pas2TF is a lot faster than Python + TF.

Yes, benchmarking sounds interesting.

I don't have a Python version of MNIST, just the Swift version to check my Pascal version's results against. I would guess that almost all processing takes place within the TensorFlow library, not in the calling code, so the choice of language shouldn't matter much for this example (mostly iterative execution of tensor operations by TensorFlow, not much really going on in the Swift or Pascal code).

A non-TensorFlow implementation of MNIST would be interesting to compare memory use as well as performance. TensorFlow can use external GPUs and TPUs, but I have no experience working with them.

Having said that, the discussion of Python's challenges (performance, concurrency, type checking, etc.) and why the Google Brain team branched Swift is quite interesting:

https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
Title: Re: artificial neural networks | back propagation
Post by: SymbolicFrank on July 12, 2018, 10:46:36 pm
Thanks, schuler.

So: everything is a single process and should be taught independently, up front.

Yes, but that limits the usability severely.
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 13, 2018, 03:26:37 am
Quote
In human vision, first we detect edges and then shapes.
In machine vision, it's exactly the same.

There is an attached image. In it, we can see the first neural layer rendered as 2D RGB images, so we can get an idea of what the network is learning. As you can see, some patterns are clearly edge detectors while others are clearly color detectors. It's interesting to note that the Pascal code doesn't say "learn edges"; this aspect (edges and colors) simply emerges as a good solution found by the gradient descent.

The attached image was generated with a prototype inside the CAI source code that anyone can run to reproduce the result. With just 180k weights, the NN reaches 85% accuracy on the CIFAR-10 test dataset. There is slight overfitting, from 90% down to 85%.
Title: Re: artificial neural networks | back propagation
Post by: SymbolicFrank on July 13, 2018, 09:30:18 am
What does that picture show?
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 13, 2018, 01:36:01 pm
Quote
But are you aware that this array takes up 1.2 GB of the 2 GB global memory limit?

I think that SymbolicFrank got the idea. This array is never actually allocated; I just need a pointer type. The big number is there just to avoid range check errors when debugging. BTW, I think I should replace the big number with a constant, as shown in SymbolicFrank's post.

Ok, I understand now. Thanks for the explanation. :)

The only reason I'm stressing this is that it seems to kind of limit the available resources when working with your framework. I mean 800 MB left is still quite a lot to work with. But in the end it all depends on the project size and complexity you might want to use CAI in.

Also, as I said earlier, if you want to use a different data type than Single, for instance Double, you already hit the limits, even if you reduce the array size significantly.

As Phil said, a dynamic array implementation might be better here.
Title: Re: artificial neural networks | back propagation
Post by: SymbolicFrank on July 13, 2018, 03:46:25 pm
mw108, it is already dynamic. There is no array taking up 1.2 GB.
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 13, 2018, 06:36:00 pm
I don't have a Python version of MNIST, just the Swift version to check my Pascal version's results against. I would guess that almost all processing takes place within the TensorFlow library, not in the calling code, so the choice of language shouldn't matter much for this example (mostly iterative execution of tensor operations by TensorFlow, not much really going on in the Swift or Pascal code).

Tested MNIST.swift and MNIST.pas on Ubuntu in a 1GB VM with 1 core. Both ran with very similar times, as expected. Interesting that TensorFlow can do this in so little memory, considering a 47MB input data file.

Using watch, memory footprint seems a little higher with the Pascal version - not sure why that would be.

Now we just need a non-TensorFlow version of MNIST to test against.

Note I fixed a memory leak in TF.pas, so if you're working with the Pascal interface, be sure to download the latest code:

https://macpgmr.github.io/MacXPlatform/PascalForTensorFlow.html

Also tested against the new 1.9 version of TensorFlow.
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 13, 2018, 08:31:25 pm
mw108, it is already dynamic. There is no array taking up 1.2 GB.

Hm ok. Then I obviously had a misconception.

If you change TNeuralFloat = Single to TNeuralFloat = Double, you can't compile CAI anymore, because the compiler says that TNeuralFloatArr is too large, as it exceeds the prescribed 2GB memory limit. You have to reduce the size of TNeuralFloatArr = array[0..300000000] of TNeuralFloat so that it compiles again.

If you calculate the size of the array using the Single type you get 1.2 GB, and I assumed that 2 GB is the (heap?) memory limit for the whole app, leaving you with only 800 MB to work with. But I see that it is obviously only per type / variable. I just tested it and I can create as many similar arrays as I like:

Code: Pascal
type
  TNeuralFloatArr = array[0..300000000] of TNeuralFloat;
  TNeuralFloatArr2 = array[0..300000000] of TNeuralFloat;
  TNeuralFloatArr3 = array[0..300000000] of TNeuralFloat;
  TNeuralFloatArr4 = array[0..300000000] of TNeuralFloat;
  TNeuralFloatArr5 = array[0..300000000] of TNeuralFloat;
  TNeuralFloatArr6 = array[0..300000000] of TNeuralFloat;

Long story short: All good. Thanks for the clarification. Forget what I said.  :D
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 13, 2018, 10:08:32 pm
Long story short: All good. Thanks for the clarification. Forget what I said.  :D

The compiler doesn't know whether the full 300000000 items of Double are going to be allocated, so under 32 bits it considers that an error. Try it under 64 bit - it should compile okay.

Again, dynamic arrays are probably a better approach. With the current approach, if you allocate less than the array type's maximum size, you lose normal range checking above the allocated upper limit.
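For comparison, a minimal dynamic-array sketch with illustrative names (the CAI volume already stores its data in a dynamic array; the point here is only that the declared bounds then match what is actually allocated, so range checking stays meaningful):

Code: Pascal
var
  Data: array of TNeuralFloat;
begin
  SetLength(Data, RequiredSize);  // allocate exactly what is needed
  Data[High(Data)] := 1.0;        // in range
  // Data[RequiredSize] := 1.0;   // would now be caught by range checking
end;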
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 15, 2018, 07:01:31 pm
This article should be of interest to anyone reading this thread:

https://www.tmssoftware.com/site/blog.asp?post=466

Interesting approach that TMS uses, putting vector and matrix expressions in a string that they then evaluate. This makes for very concise user code and allows them to support their own vector and matrix operators (eg, x for cross product).
Title: Re: artificial neural networks | back propagation
Post by: Thaddy on July 15, 2018, 09:50:54 pm
Although that is a smart approach, it is not a fast approach.
Speed is usually of foremost importance here, and string manipulation does not help.
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 15, 2018, 09:53:16 pm
Although that is a smart approach, it is not a fast approach.
Speed is usually of foremost importance here, and string manipulation does not help.

I don't see how that would make any measurable difference. Most of the time will be spent doing the actual number crunching, not parsing the string containing the expression.
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 26, 2018, 05:43:04 pm
Now we just need a non-TensorFlow version of MNIST to test against.

I did a MNIST implementation using the CAI framework.

The NN layout is based on one of the CAI CIFAR learning applications, inspired by TF.

Since I'm relatively new in the whole ANN area, I honestly have no idea if I did this correctly. So bear with me. :D

The recognition rate of the test program is pretty good. However, it has major problems recognizing a 9, it always recognizes it as a 3. Don't know why.

You can find everything in this repo. Also a pretrained NN structure and weights file. You still need the CAI framework and the MNIST dataset though.

https://bitbucket.org/108bits/cai-implementations/src/64d3b892d64cb2ca231a7df37ed5d0888ae924c4/MNIST/?at=master

Title: Re: artificial neural networks | back propagation
Post by: Phil on July 26, 2018, 07:11:00 pm
Now we just need a non-TensorFlow version of MNIST to test against.

I did a MNIST implementation using the CAI framework.

The NN layout is based on one of the CAI CIFAR learning applications, inspired by TF.

Maybe I'm not reading your results correctly, but it looks like 20 iterations through the training loop took 2850 seconds. Is that correct? If so, that would seem very slow. MNIST.pas based on TensorFlow took a minute or less depending on the system.
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 26, 2018, 07:21:28 pm
The NN reached 99% accuracy after epoch #13 in ~30 minutes, and 99% on the test dataset after epoch #15 and 37 min. I just didn't see it earlier to cancel the process. One epoch iterates through all 60,000 MNIST images.

Unfortunately I don't know how TF or your implementation works or what NN layout you used, but learning 60,000 images and reaching 99% accuracy on the 10,000-image test dataset, all in less than a minute, seems a bit unrealistic?
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 26, 2018, 07:36:55 pm
Ok, I did some reading and it seems that a 128-image batch indeed completes in approx. 0.5 sec in TF.

https://stackoverflow.com/questions/48035125/speed-of-logistic-regression-on-mnist-with-tensorflow

That's really fast, yes. And CAI is very slow compared to that. But then, it is still a WIP. Maybe João can make it faster over time.

And I personally don't understand TF at all. IMHO it is very complicated for beginners to get a hold of.
Title: Re: artificial neural networks | back propagation
Post by: Phil on July 26, 2018, 08:01:48 pm
Ok, I did some reading and it seems that a 128-image batch indeed completes in approx. 0.5 sec in TF.

https://stackoverflow.com/questions/48035125/speed-of-logistic-regression-on-mnist-with-tensorflow

That's really fast, yes. And CAI is very slow compared to that. But then, it is still a WIP. Maybe João can make it faster over time.

And I personally don't understand TF at all. IMHO it is very complicated for beginners to get a hold of.

Yes, for the TensorFlow-based MNIST.pas, 20 iterations through the training loop took 12 to 60 seconds, depending on the system.

I think TensorFlow is actually easier to understand since it has a lot of related documentation. For example, to understand MNIST.pas or the MNIST.swift program it was based on, see Part 5 here:

https://towardsdatascience.com/machine-learning-with-swift-for-tensorflow-9167df128912?gi=aadadd2fbc78

Obviously speed is important, so I don't see any way that I can use CAI. And of course, since CAI has a GPL license, I've already had to rule it out: I can't use GPL code in my apps.
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 26, 2018, 08:19:46 pm
https://towardsdatascience.com/machine-learning-with-swift-for-tensorflow-9167df128912?gi=aadadd2fbc78
Ok, I will check that out. Thanks. :)
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 26, 2018, 09:29:40 pm
mw108,
I'm testing your code now. Deep congrats on your effort. In regard to speed, CAI benchmarking with CIFAR-10 is impressive. I'll update your code and send it back for your review.
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 26, 2018, 10:29:17 pm
Great. Looking forward to your results. :)
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 26, 2018, 10:58:39 pm
 :) Hello :)
Really enjoyed looking at your code.

My first suggestion is to use this NN structure (unorthodox and efficient):
Code: Pascal
Input := FNN.AddLayer(TNNetInput.Create(FNNConfig.Width, FNNConfig.Height, FNNConfig.Depth));
FNN.AddLayer(TNNetConvolutionReLU.Create(16, 5, 0, 1));
FNN.AddLayer(TNNetMaxPool.Create(4));
FNN.AddLayer(TNNetConvolutionReLU.Create(64, 3, 1, 1));
FNN.AddLayer(TNNetConvolutionReLU.Create(64, 3, 1, 1));
FNN.AddLayer(TNNetFullConnectReLU.Create(64));
FNN.AddLayer(TNNetFullConnectReLU.Create(32));
FNN.AddLayer(TNNetFullConnectLinear.Create(FNNConfig.NumClasses));
FNN.AddLayer(TNNetSoftMax.Create());

Then, use a bigger learning rate with a smoother decay:
Code: Pascal
LearningRate := 0.001;
MinLearningRate := 0.00001;
LearningRateDecay := 0.99;
StaircaseEpochs := 1;

You can compare the attachments with your code and apply the changes you like. I made the saving of the NN less resource-hungry; on bigger machines (such as those with 64 or 96 cores), NN saving might be too intensive. I might benchmark over the weekend on a high-core-count computer.

On my dual-core notebook, each epoch takes from 50 to 65 seconds (no video card, no OpenCL).

About benchmarking, we can't compare 3 convolutional layers + 3 FC layers with "simpler methods".

Have a look at how the first epoch goes in the attached image (96% accuracy).

BTW, really well done. Congrats.
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 26, 2018, 11:38:38 pm
Wow. That's really a huge improvement! :o Really glad I did something wrong and it wasn't the fault of CAI. :D

On my machine (i7 4GHz, 32GB RAM) one epoch finishes in about 25sec now, with one batch taking about 0.3s (Fwd + Backprop), using no OpenCL.

The test target error rate of 0.5 was reached after the 7th epoch after 166s (~2.5min).

I updated the Repo with the changes. Also added the new pretrained model and the protocol. :)
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 27, 2018, 12:11:01 am
Small fix in the Repo: Forgot to set the new LearningRateDecay to 0.99. Was still at 0.01.

Result: The test target error rate of <= 0.05 is now already reached almost after the 2nd epoch (0.54). After the 3rd epoch it is 0.34.  :o
Title: Re: artificial neural networks | back propagation
Post by: mw108 on July 27, 2018, 09:09:19 am
Small update to the repo again.

João provided a more reduced NN layout, which finishes a batch in 0.10 to 0.15s and an epoch in ~10s. :)
Title: Re: artificial neural networks | back propagation
Post by: schuler on July 27, 2018, 09:23:27 am
mw108, love seeing your results!

For anyone who has no idea what we are talking about, I decided to share some links here:
https://en.wikipedia.org/wiki/MNIST_database
https://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html