Search Results

Search found 4319 results on 173 pages for 'native'.


  • Turning .NET executable into native executable

    - by lmsasu
    Hello all, is there any way to convert an application developed in .NET (sources included) into a native executable? Installing the whole framework (up to .NET Framework 3.5 SP1) takes a lot of time, and the target computers can't always update from the internet. Is it possible to call NGen in order to produce independent executables? Thanks
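
    For context, a minimal sketch of the usual NGen invocation (run from an elevated prompt). Note that NGen only precompiles an assembly into the machine's native image cache; the resulting image still requires the .NET runtime to be installed, so it does not produce a framework-independent executable:

        REM ngen.exe ships with the framework itself, e.g. under
        REM %WINDIR%\Microsoft.NET\Framework\v2.0.50727\
        ngen install MyApplication.exe
        REM list the native images generated for the assembly
        ngen display MyApplication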

    Read the article

  • Building a DLL via Maven with mojo-native

    - by graham.reeds
    I can build a simple DLL consisting of a source file, a header file and a definition file, but now I am progressing beyond a simple toy DLL and working towards something more complex. The DLL I am trying to compile has 2 source files, 2 headers and the dreaded stdafx pair. To compile normally you would use /Yc to create the precompiled header and /Yu to use it. How do you specify that within the constraints of mojo-native's compiler options?
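
    A sketch of how this might be expressed with the mojo native-maven-plugin; the compilerStartOptions element is taken from the plugin's pass-through compiler options, but the exact element names should be verified against the plugin version in use, and the /Yc-vs-/Yu split (create for stdafx.cpp, use for the rest) may need a separate compilation step:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>native-maven-plugin</artifactId>
          <configuration>
            <!-- assumption: raw MSVC flags can be passed straight through -->
            <compilerStartOptions>
              <compilerStartOption>/YuStdAfx.h</compilerStartOption>
            </compilerStartOptions>
          </configuration>
        </plugin>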

    Read the article

  • Native SQL - How to set the schema and database names

    - by icelobber
    I'm using Native SQL from ABAP. The query to get the data looks like this:

        SELECT COUNT(ROWID) FROM <SCHEMANAME>.<TABLENAME>@<DATABASENAME> INTO :localvariable

    I want to somehow set the schema name and database name as defaults so that I do not need to repeat them in later SELECTs; then I could use only the table name in the SELECT. Thanks!!
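
    If the backend is Oracle (which the schema@dbname syntax suggests), one workaround is to switch the session's default schema right after opening the native connection; a minimal sketch, covering the schema part only, not the database link:

        EXEC SQL.
          ALTER SESSION SET CURRENT_SCHEMA = SCHEMANAME
        ENDEXEC.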

    Read the article

  • JNI problem when calling a native library that loads another native library

    - by TheEnemyOfQuality
    I've got a bit of an odd problem. I have a project in C++ that's basically a wrapper for a third-party DLL, like this:

        MyLibrary
        --loads DLL_A
        ----loads DLL_B

    I load DLL_A with LoadLibrary(), wrap several of its functions and generate my own DLL. I've tested this in a C++ project and a C# project. Both do everything they're supposed to do: load DLL_A, make a couple of function calls, and indirectly load DLL_B.

    The problem is when I build a DLL for Java and make the calls through JNI. Everything runs like it should (no java.lang.UnsatisfiedLinkError), but when it comes time for DLL_A to load DLL_B, it doesn't work. From debugging, the loading of DLL_B happens on a function call in DLL_A that takes a callback. When called from Java, this function call seems to fail (the function pointer is fine and the actual call goes off without a hitch): I get an odd pop-up window saying DLL_B failed to load, and my program is left waiting for a callback that never happens.

    I can explicitly load DLL_B just fine (both from Java and from C++), and I've checked every possible path and path variable, and tried placing the DLLs everywhere to see if it could be looking somewhere funny. I'm pretty sure it's not a path problem. Ultimately, I don't know how DLL_A is loading DLL_B, and I can't figure out why everything works fine in C++ and C# but not in Java. I'm absolutely flummoxed. It could still be something specific to my setup (although I've looked as hard as I can look), but I'm throwing this scenario out there to see if anyone has run into a similar problem. -Dave
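
    One pattern worth ruling out (a sketch, not a confirmed diagnosis): a DLL loaded through JNI has its own dependencies resolved by the Windows loader's search path, not by java.library.path, so explicitly pre-loading the dependency from Java sometimes cures exactly this symptom:

        public class MyLibraryLoader {
            static {
                // load the dependency first, so it is already mapped into the
                // process by the time DLL_A asks for it (names from the post)
                System.loadLibrary("DLL_B");   // or System.load("C:\\full\\path\\to\\DLL_B.dll")
                System.loadLibrary("MyLibrary");
            }
        }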

    Read the article

  • Java native method issues with the SUN JVM (jdk1.5.0_14) and multi-core CPUs

    - by Mattias Arnersten
    We are hosting an application on the SUN JVM that does a lot of XML parsing using JAXB. The application parses the XML fine using JRockit 5, but when using the SUN JVM, the JVM spends the majority of its time in native methods such as java.lang.System.arraycopy, java.lang.String.intern and java.lang.ClassLoader.getPackage. The CPU load is approx. 60% higher with the SUN JVM compared with JRockit. Even stranger, when we run the application server on only one core (in VMware), the problem disappears. Has anyone experienced the same behavior? Mattias Arnersten

    Read the article

  • Load only some columns with Hibernate native SQL queries

    - by Alessandro Dionisi
    I have a table in the database and I want to load only some of its columns from the result set. I defined a native SQL query in the hbm file:

        <sql-query name="query">
          <return alias="r" class="RawData"/>
          <![CDATA[
            SELECT DESCRIPTION as {r.description}
            FROM RAWD_RAWDATAS r
            WHERE r.RAWDATA_ID=?
          ]]>
        </sql-query>

    This query, however, fails with the error could not read column value from result set: RAWDATA1_14_0_; Invalid column name (SQL Error: 17006, SQLState: null), because Hibernate tries to load all of the entity's fields from the result set. I also found a matching bug in the Hibernate JIRA (http://opensource.atlassian.com/projects/hibernate/browse/HHH-3035). Does anyone know a workaround for this?
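
    One workaround is to give up the entity mapping for this query and declare the selected columns as scalars instead; a sketch using Hibernate's <return-scalar> mapping (column alias and type chosen for illustration):

        <sql-query name="query">
          <return-scalar column="description" type="string"/>
          <![CDATA[
            SELECT r.DESCRIPTION AS description
            FROM RAWD_RAWDATAS r
            WHERE r.RAWDATA_ID = ?
          ]]>
        </sql-query>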

    Read the article

  • Running a Java program with a .dll from Adobe AIR's native process

    - by Donny
    I would like to be able to operate a scanner from my AIR application. Since there's no support for this natively, I'm trying to use the NativeProcess class to start a jar file that can run the scanner. The Java code uses the JTwain library to operate the scanner. The Java application runs fine by itself, and the AIR application can start and communicate with the Java application. The problem seems to be that any time I attempt to use a function from JTwain (which relies on JTwain.dll), the application dies IF AIR STARTED IT. I'm not sure if there's some limit on referencing dll files from the native process or what. I've included my code below.

    Java code:

        while (true) {
            try {
                System.out.println("Start");
                text = in.readLine();
                Source source = SourceManager.instance().getCurrentSource();
                System.out.println("Java says: " + text);
            } catch (IOException e) {
                System.err.println("Exception while reading the input. " + e);
            } catch (Exception e) {
                System.out.println("Other exception occured: " + e.toString());
            } finally {
            }
        }

    AIR application:

        import mx.events.FlexEvent;

        private var nativeProcess:NativeProcess;
        private var npInfo:NativeProcessStartupInfo;
        private var processBuffer:ByteArray;
        private var bLength:int = 0;

        protected function windowedapplication1_applicationCompleteHandler(event:FlexEvent):void
        {
            var arg:Vector.<String> = new Vector.<String>;
            arg.push("-jar");
            arg.push(File.applicationDirectory.resolvePath("Hello2.jar").nativePath);

            processBuffer = new ByteArray;
            npInfo = new NativeProcessStartupInfo;
            npInfo.executable = new File("C:/Program Files/Java/jre6/bin/javaw.exe");
            npInfo.arguments = arg;

            nativeProcess = new NativeProcess;
            nativeProcess.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onStandardOutputData);
            nativeProcess.start(npInfo);
        }

        private function onStandardOutputData(e:ProgressEvent):void
        {
            tArea.text += nativeProcess.standardOutput.readUTFBytes(nativeProcess.standardOutput.bytesAvailable);
        }

        protected function button1_clickHandler(event:MouseEvent):void
        {
            tArea.text += 'AIR app: ' + tInput.text + '\n';
            nativeProcess.standardInput.writeMultiByte(tInput.text + "\n", 'utf-8');
            tInput.text = '';
        }

        protected function windowedapplication1_closeHandler(event:Event):void
        {
            nativeProcess.closeInput();
        }
        ]]>
        </fx:Script>

        <s:Button label="Send" x="221" y="11" click="button1_clickHandler(event)"/>
        <s:TextInput id="tInput" x="10" y="10" width="203"/>
        <s:TextArea id="tArea" x="10" width="282" height="88" top="40"/>

    I would love some explanation of why this is dying. I've done enough testing to know that the line that kills it is SourceManager.instance().getCurrentSource(). I would love any suggestions. Thanks.
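
    One concrete thing to try (a sketch, assuming JTwain.dll is resolved relative to the Java process's working directory, which differs when AIR launches javaw.exe compared with running the jar by hand): NativeProcessStartupInfo lets you pin the child process's working directory explicitly:

        // assumption: JTwain.dll sits next to Hello2.jar in the application directory
        npInfo.workingDirectory = File.applicationDirectory;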

    Read the article

  • Marshalling to a native library in C#

    - by Daniel Baulig
    I'm having trouble calling functions of a native library from within managed C# code. I am developing for the .NET Compact Framework 3.5 (Windows Mobile 6.x), just in case that makes any difference. I am working with the waveIn* functions from coredll.dll (these are in winmm.dll on regular Windows, I believe). This is what I came up with:

        // namespace winmm; class winmm
        [StructLayout(LayoutKind.Sequential)]
        public struct WAVEFORMAT
        {
            public ushort wFormatTag;
            public ushort nChannels;
            public uint nSamplesPerSec;
            public uint nAvgBytesPerSec;
            public ushort nBlockAlign;
            public ushort wBitsPerSample;
            public ushort cbSize;
        }

        [StructLayout(LayoutKind.Sequential)]
        public struct WAVEHDR
        {
            public IntPtr lpData;
            public uint dwBufferLength;
            public uint dwBytesRecorded;
            public IntPtr dwUser;
            public uint dwFlags;
            public uint dwLoops;
            public IntPtr lpNext;
            public IntPtr reserved;
        }

        public delegate void AudioRecordingDelegate(IntPtr deviceHandle, uint message, IntPtr instance, ref WAVEHDR wavehdr, IntPtr reserved2);

        [DllImport("coredll.dll")]
        public static extern int waveInAddBuffer(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint cWaveHdrSize);

        [DllImport("coredll.dll")]
        public static extern int waveInPrepareHeader(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint Size);

        [DllImport("coredll.dll")]
        public static extern int waveInStart(IntPtr hWaveIn);

        // some other class
        private WinMM.WinMM.AudioRecordingDelegate waveIn;
        private IntPtr handle;
        private uint bufferLength;

        private void setupBuffer()
        {
            byte[] buffer = new byte[bufferLength];
            GCHandle bufferPin = GCHandle.Alloc(buffer, GCHandleType.Pinned);

            WinMM.WinMM.WAVEHDR hdr = new WinMM.WinMM.WAVEHDR();
            hdr.lpData = bufferPin.AddrOfPinnedObject();
            hdr.dwBufferLength = this.bufferLength;
            hdr.dwFlags = 0;

            int i = WinMM.WinMM.waveInPrepareHeader(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr)));
            if (i != WinMM.WinMM.MMSYSERR_NOERROR)
            {
                this.Text = "Error: waveInPrepare";
                return;
            }

            i = WinMM.WinMM.waveInAddBuffer(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr)));
            if (i != WinMM.WinMM.MMSYSERR_NOERROR)
            {
                this.Text = "Error: waveInAddrBuffer";
                return;
            }
        }

        private void setupWaveIn()
        {
            WinMM.WinMM.WAVEFORMAT format = new WinMM.WinMM.WAVEFORMAT();
            format.wFormatTag = WinMM.WinMM.WAVE_FORMAT_PCM;
            format.nChannels = 1;
            format.nSamplesPerSec = 8000;
            format.wBitsPerSample = 8;
            format.nBlockAlign = Convert.ToUInt16(format.nChannels * format.wBitsPerSample);
            format.nAvgBytesPerSec = format.nSamplesPerSec * format.nBlockAlign;
            this.bufferLength = format.nAvgBytesPerSec;
            format.cbSize = 0;

            int i = WinMM.WinMM.waveInOpen(out this.handle, WinMM.WinMM.WAVE_MAPPER, ref format, Marshal.GetFunctionPointerForDelegate(waveIn), 0, WinMM.WinMM.CALLBACK_FUNCTION);
            if (i != WinMM.WinMM.MMSYSERR_NOERROR)
            {
                this.Text = "Error: waveInOpen";
                return;
            }

            setupBuffer();
            WinMM.WinMM.waveInStart(this.handle);
        }

    I read a lot about marshalling over the last few days; nevertheless, I cannot get this code working. When my callback function (waveIn) is called because the buffer is full, the hdr structure passed back in wavehdr is obviously corrupted. Here is an example of what the structure looks like at that point:

        - wavehdr            {WinMM.WinMM.WAVEHDR}   WinMM.WinMM.WAVEHDR
            dwBufferLength   0x19904c00              uint
            dwBytesRecorded  0x0000fa00              uint
            dwFlags          0x00000003              uint
            dwLoops          0x1990f6a4              uint
          + dwUser           0x00000000              System.IntPtr
          + lpData           0x00000000              System.IntPtr
          + lpNext           0x00000000              System.IntPtr
          + reserved         0x7c07c9a0              System.IntPtr

    This obviously is not what I expected to get passed. I am also concerned about the order of the fields in this view. I do not know whether Visual Studio .NET cares about actual memory order when displaying the record in the "Locals" view, but the fields are obviously not displayed in the order I specified in the struct. Then there's no data pointer, and the dwBufferLength field is far too high. Interestingly, dwBytesRecorded is exactly 64000; I'd expect both dwBufferLength and dwBytesRecorded to be 64000. I do not know exactly what is going wrong; maybe someone can help me out on this. I'm an absolute noob to managed-code programming and marshalling, so please don't be too harsh on me for all the stupid things I've probably done.

    Oh, here's the C definition of WAVEHDR, which I found here; I believe I might have done something wrong in the C# struct definition:

        /* wave data block header */
        typedef struct wavehdr_tag {
            LPSTR lpData;                     /* pointer to locked data buffer */
            DWORD dwBufferLength;             /* length of data buffer */
            DWORD dwBytesRecorded;            /* used for input only */
            DWORD_PTR dwUser;                 /* for client's use */
            DWORD dwFlags;                    /* assorted flags (see defines) */
            DWORD dwLoops;                    /* loop control counter */
            struct wavehdr_tag FAR *lpNext;   /* reserved for driver */
            DWORD_PTR reserved;               /* reserved for driver */
        } WAVEHDR, *PWAVEHDR, NEAR *NPWAVEHDR, FAR *LPWAVEHDR;

    If you are used to working with all those low-level tools like pointer arithmetic, casts, etc., starting to write managed code is a pain in the ass. It's like trying to learn how to swim with your hands tied behind your back. Some things I tried (to no effect):

    The .NET Compact Framework does not seem to support the Pack = 2^x directive in [StructLayout]. I tried [StructLayout(LayoutKind.Explicit)] and used 4-byte and 8-byte alignment. 4-byte alignment gave me the same result as the above code, and 8-byte alignment only made things worse - but that's what I expected.

    Interestingly, if I move the code from setupBuffer into setupWaveIn, and declare the GCHandle not in the context of the class but in the local context of setupWaveIn, the struct returned to the callback function does not seem to be corrupted. I am not sure, however, why that is the case or how I can use this knowledge to fix my code.

    I'd really appreciate any good links on marshalling, calling unmanaged code from C#, etc. And I'd be very happy if someone could point out my mistakes. What am I doing wrong? Why do I not get what I'd expect?
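
    A likely culprit, sketched as a fix rather than a confirmed diagnosis: waveInPrepareHeader/waveInAddBuffer hand the address of the WAVEHDR to the driver, which writes into it asynchronously, but here hdr is a local variable, so that address dangles as soon as setupBuffer returns - which would also explain why keeping the handles alive in setupWaveIn seemed to help. Allocating the header in unmanaged memory and keeping it for the lifetime of the recording avoids this (the P/Invoke declarations would then take an IntPtr instead of ref WAVEHDR):

        // fields, so they stay alive as long as recording runs
        private GCHandle bufferPin;
        private IntPtr hdrPtr;

        private void setupBuffer()
        {
            byte[] buffer = new byte[bufferLength];
            bufferPin = GCHandle.Alloc(buffer, GCHandleType.Pinned);

            WinMM.WinMM.WAVEHDR hdr = new WinMM.WinMM.WAVEHDR();
            hdr.lpData = bufferPin.AddrOfPinnedObject();
            hdr.dwBufferLength = bufferLength;

            // copy the header into unmanaged memory so its address stays valid
            hdrPtr = Marshal.AllocHGlobal(Marshal.SizeOf(hdr));
            Marshal.StructureToPtr(hdr, hdrPtr, false);

            WinMM.WinMM.waveInPrepareHeader(this.handle, hdrPtr, (uint)Marshal.SizeOf(hdr));
            WinMM.WinMM.waveInAddBuffer(this.handle, hdrPtr, (uint)Marshal.SizeOf(hdr));
        }

        // in the callback, read the header back out of unmanaged memory:
        // WAVEHDR hdr = (WAVEHDR)Marshal.PtrToStructure(hdrPtr, typeof(WAVEHDR));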

    Read the article

  • EPM 11.1.2 - In WebLogic Server, Enable Native IO Performance Pack

    - by Ahmed Awan
    Performance can be improved by enabling native IO in production mode. WebLogic Server benchmarks show major performance improvements when native performance packs are used on machines that host Oracle WebLogic Server instances. Important note: always enable native I/O, if available, and check for errors at startup to make sure it is being initialized properly. Tip: the use of NATIVE performance packs is enabled by default in the configuration shipped with your distribution. You can use the Administration Console to verify that performance packs are enabled by clicking on each managed server and then clicking the Tuning tab.
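
    For reference, this corresponds to the server's NativeIOEnabled attribute; a minimal sketch of how it appears in the domain's config.xml (element placement assumed from the standard server configuration schema):

        <server>
          <name>ManagedServer1</name>
          <!-- assumption: true enables the native performance pack (native muxer) -->
          <native-io-enabled>true</native-io-enabled>
        </server>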

    Read the article

  • Google I/O 2012 - Native Client LIVE

    Colton McAnlis, Noel Allen. In this talk, we will be porting an application to Native Client in 60 minutes, LIVE, showing the power of what Native Client can provide for traditional C++ developers looking to move to the web. In the porting process we'll cover specific tasks that a developer would need to perform during a port, and how to address them with new tools and technologies, including debugging integration with Visual Studio and a set of newly added utility libraries in the SDK. Attendees of this session will walk away with a clear understanding of what's required to port their applications to Native Client so that they can start their own projects. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 48:21

    Read the article

  • Introducing the Native Client SDK

    Henry Bridge, product manager for Native Client, introduces the developer preview of the Native Client SDK. For more information go to code.google.com. From: GoogleDevelopers. Time: 05:00

    Read the article

  • IIS7 Modules - managed or native?

    - by Simon Linder
    Hi all, as the old ISAPI filters are going to die sooner or later, I want to rewrite an old ISAPI filter that was used in IIS 6 as a module for IIS 7. The module will be used globally, meaning it will run in every site, on a Windows Server 2008 R2 machine with IIS 7.5 that will host several thousand web sites and manage about 50 application pools. My question is whether I should write that module in managed or unmanaged code. One of my concerns regarding managed code is memory consumption due to the .NET framework overhead; I don't know how this would affect the server's performance. I have already written modules in managed as well as in unmanaged code, so that is not what is holding up my decision, but I would prefer to write the module in C# if there are no huge drawbacks. Any suggestions about this issue?
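
    For a sense of what the managed route looks like: a global managed module is just an IHttpModule registered at the server level. A minimal sketch (class name and event hook chosen for illustration):

        // minimal managed module; register it in the server-level web.config
        using System;
        using System.Web;

        public class RequestFilterModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // subscribe to the same stage the old ISAPI filter hooked
                app.BeginRequest += (s, e) => { /* filtering logic here */ };
            }

            public void Dispose() { }
        }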

    Read the article

  • Simulating that a native object does not exist

    - by Naitro
    Here is the situation: I have a check for an existing class, like:

        ('Promise' in window) // true/false

    and I want to force it to return false or true - can I do that? Yes, I can check it some other way, like:

        window.Promise = undefined;
        window.Promise === undefined;

    But can I somehow delete this object, or simulate something for the 'in' operator? I checked the specification and the V8 code: the 'in' operator just invokes the 'HasProperty' internal operation, which is implemented in C++. I know the 'hack' of faking toString/valueOf methods:

        obj = { toString: function(){ return 'myName'; } };
        obj2 = {};
        obj2[obj] = 1; // Object {myName: 1}

    Maybe I can use that in some way? But since the check is done with the string 'Promise', I can't fake it like this. Maybe there is some way to fake 'HasProperty'?
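
    For what it's worth, window.Promise is defined as a configurable property in current browsers, so a plain delete is usually enough to make the 'in' check report false; a quick sketch (save a reference first if you need to restore it):

        var P = window.Promise;            // keep a reference for restoring later
        delete window.Promise;             // configurable, so delete succeeds
        console.log('Promise' in window);  // false

        window.Promise = P;                // put it back
        console.log('Promise' in window);  // true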

    Read the article

  • Advice on whether to use a native C++ DLL or not: P/Invoke & marshaling?

    - by Bob
    What's the best way to do this? I have some native C++ code that uses a lot of Win32 calls together with byte buffers (allocated using HeapAlloc). I'd like to extend the code and make a C# GUI, and maybe later a basic Win32 GUI (for use where there is no .NET and limited MFC support). (A) I could just re-write the code in C# and use multiple P/Invokes... but even with the P/Invokes in a separate class, the code looks messy with all the marshaling, and I'd be re-writing a lot of code. (B) I could create a native C++ DLL and use P/Invoke to marshal in the native data structures. I'm assuming I can include the native C++ DLL/LIB in a project using C#? (C) I could create a mixed-mode DLL (a native C++ class plus a managed ref class). I'm assuming this would make it easier to use the managed ref class from C#... but is that the case? Will the managed class handle all the marshaling? Can I use this mixed-mode DLL on a platform with no .NET (i.e. still access the native C++ unmanaged component), or do I limit myself to .NET-only platforms? One thing that bothers me about each of these options is all the marshalling. Is it better to create a managed data structure (array, string, etc.) and pass that to the native C++ class, or the other way around? Any ideas on what would be considered best practice?
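
    To make option (C) concrete, a minimal mixed-mode (C++/CLI) sketch of a ref class wrapping a HeapAlloc'd buffer (names are illustrative); managed callers see a normal .NET class and the native buffer never crosses a P/Invoke boundary:

        // compile with /clr; managed wrapper around an unmanaged buffer
        #include <windows.h>

        public ref class NativeBuffer
        {
        public:
            NativeBuffer(size_t size)
            {
                m_pBuf = static_cast<BYTE*>(
                    HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, size));
            }
            ~NativeBuffer() { this->!NativeBuffer(); }   // Dispose
            !NativeBuffer()                              // finalizer
            {
                if (m_pBuf) { HeapFree(GetProcessHeap(), 0, m_pBuf); m_pBuf = nullptr; }
            }
        private:
            BYTE* m_pBuf;
        };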

    Read the article

  • How to: Show wait cursor in managed and native code

    - by TechTwaddle
    Someone on the MSDN forum asked how to show a wait cursor, for example while your application is loading or performing some (background) task. It's pretty simple to show the wait cursor in both managed and native code, and in this post we will see just how.

    Managed Code (C#)

    Set Cursor.Current to Cursors.WaitCursor and call Cursor.Show(). To come back to the normal cursor, set Cursor.Current to Cursors.Default and call Show() again. Below is a button handler from a sample app that I made (watch the video below):

        private void button1_Click(object sender, EventArgs e)
        {
            lblProgress.Text = "Downloading ether...";
            lblProgress.Update();
            Cursor.Current = Cursors.WaitCursor;
            Cursor.Show();
            // do some processing
            for (int i = 0; i < 50; i++)
            {
                progressBar1.Value = 2 * (i + 1);
                Thread.Sleep(100);
            }
            Cursor.Current = Cursors.Default;
            Cursor.Show();
            lblProgress.Text = "Download complete.";
            lblProgress.Update();
        }

    Native Code

    In native code, call SetCursor(LoadCursor(NULL, IDC_WAIT)); to show the wait cursor, and SetCursor(LoadCursor(NULL, IDC_ARROW)); to come back to normal. The same button handler in the native version of the app is below:

        case IDC_BUTTON_DOWNLOAD:
            {
                HWND temp;
                temp = GetDlgItem(hDlg, IDC_STATIC_PROGRESS);
                SetWindowText(temp, L"Downloading ether...");
                UpdateWindow(temp);
                SetCursor(LoadCursor(NULL, IDC_WAIT));

                temp = GetDlgItem(hDlg, IDC_PROGRESSBAR);
                for (int i = 0; i < 50; i++)
                {
                    SendMessage(temp, PBM_SETPOS, (i + 1) * 2, 0);
                    Sleep(100);
                }

                SetCursor(LoadCursor(NULL, IDC_ARROW));
                temp = GetDlgItem(hDlg, IDC_STATIC_PROGRESS);
                SetWindowText(temp, L"Download Complete.");
                UpdateWindow(temp);
            }
            break;

    Here is a video of the sample app running: the managed version is deployed first, followed by the native version.
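
    One small hardening worth noting (a sketch, not part of the original post): restore the cursor in a finally block so an exception in the work loop doesn't leave the wait cursor stuck; DoWork() here is a hypothetical stand-in for the processing loop:

        private void button1_Click(object sender, EventArgs e)
        {
            Cursor.Current = Cursors.WaitCursor;
            Cursor.Show();
            try
            {
                DoWork();   // hypothetical long-running work
            }
            finally
            {
                // runs even if DoWork() throws
                Cursor.Current = Cursors.Default;
                Cursor.Show();
            }
        }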

    Read the article

  • Script language native extensions - avoiding name collisions and cluttering others' namespace

    - by H2CO3
    I have developed a small scripting language and I've just started writing the very first native library bindings for it. This is practically the first time I'm writing a native extension to a scripting language, so I've run into a conceptual issue. I'd like to write glue code for popular libraries so that they can be used from this language, and because of the design of the engine I've written, this is achieved using an array of C structs describing the function name visible to the virtual machine, along with a function pointer. Thus, a native binding is really just a global array variable, and now I obviously must give it a (preferably good) name. In C, it's idiomatic to put one's own functions in a "namespace" by prepending a custom prefix to function names, as in myscript_parse_source() or myscript_run_bytecode(). The prefix should ideally reflect the name of the library it is part of. Here the confusion arises. Let's say I'm writing a binding for libcURL. In that case, it seems reasonable to call my extension library curl_myscript_binding, like this:

        MYSCRIPT_API const MyScriptExtFunc curl_myscript_lib[10];

    But now this collides with the curl namespace. (I have even thought about calling it curlmyscript_lib, but unfortunately libcURL does not exclusively use the curl_ prefix -- the public APIs contain macros like CURLCODE_* and CURLOPT_*, so I assume this would clutter the namespace as well.) Another option would be to declare it as myscript_curl_lib, but that's good only as long as I'm the only one writing bindings (since I know what I am doing with my namespace). As soon as other contributors start to add their own native bindings, they will clutter the myscript namespace. (I've done some research, and it seems that, for example, the Perl cURL binding follows this pattern. Not sure what I should think about that...) So how do you suggest I name my variables? Are there any general guidelines that should be followed?

    Read the article

  • Make a compiled binary run at native speed flawlessly on another system, without recompiling from source?

    - by unknownthreat
    I know that many people, at first glance at the question, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate my question first.

    Normally, when we want our program to run at native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile from source code. If you want to run a program from another system on your system, you need to use a virtual machine or an emulator. While these tools allow you to use the program you need on the non-native OS, they sometimes have performance problems and glitches. We also have JIT compilers, where the compiler translates bytecode into native machine code before execution. Performance may increase to a very good extent with a JIT compiler, but it is still not the same as running on a native system.

    Another program on Linux, WINE, is also a good tool for running Windows programs on a Linux system. I have tried running Team Fortress 2 on it and experimented with some settings. I got ~40 fps on Windows at mid-high settings at 1280 x 1024. On Linux, I need to turn everything down at 1280 x 1024 to get ~40 fps. There are two notable things, though: polygon model settings do not seem to affect the framerate, whether I set them low or high; and when there are post-processing effects or special effects that require manipulating the drawn pixels of the current frame, the framerate drops to 10-20 fps. From this, I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that require the graphics card to do the job, it slows down to a crawl.

    Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run Steam and Team Fortress 2; although there are flaws, they run at lower settings. Or perhaps I should also ask: "Is it possible to translate a whole program from one system to another without recompiling from source and get native speed?" I see that we also have AOT compilers; is it possible to use one for something like this? Or are there so many constraints (such as DirectX calls or differences in software architecture) that it is impossible to have a program that is not native to the system yet runs flawlessly at native speed?

    Read the article

  • Chrome Apps + Native Client

    Did you know that you can use Native Client inside a Chrome App? Join +John Mccutchan and +Pete LePage as they introduce Native Client Acceleration Modules (NaCl AMs), which expose C++ libraries to JavaScript programs. NaCl AMs can be used for bulk data processing (compression, encryption), but they also work well in interactive applications that require low latency. We'll explain how to build a NaCl Acceleration Module and demo a Bullet Physics engine running inside a Chrome App, with a NaCl AM interacting with an HTML and JavaScript UI built on three.js. We'll be live on Tuesday, December 11th at 9am PT, showing you code and samples and answering your questions. From: GoogleDevelopers. Time: 45:00

    Read the article

  • HTML5 mobile game development vs. native game apps

    - by Vic Szpilman
    What is the current state of game engines, frameworks, libraries and conversions related to the HTML5 set of technologies (including CSS3 and JavaScript libraries such as RaphaelJS, Impact and gameQuery), and how does the best of that compare with developing a native app (especially for iOS and Android)? Especially in terms of performance, visuals and getting that "native feel". Thoughts on solutions such as Appcelerator and Corona SDK are also appreciated. In regard to Unity3D: is it possible to develop in it and still have the game be playable in a browser (such as current releases of Chrome or Firefox, at least) without any dependencies or having the user install anything (no Unity web player)? What I'm looking for is how to develop against web standards so as to reach the maximum number of platforms (including outside mobile) while still retaining a native experience for mobile, without having to implement the game anew for iOS and Android.

    Read the article
