Search Results

Search found 7940 results on 318 pages for 'intel wireless'.


  • Compile and optimize for different target architectures

    - by Peter Smit
    Summary: I want to take advantage of compiler optimizations and processor instruction sets, but still have a portable application (running on different processors). Normally I could indeed compile 5 times and let the user choose the right one to run. My question is: how can I automate this, so that the processor is detected at runtime and the right executable is executed without the user having to choose it? I have an application with a lot of low-level math calculations. These calculations will typically run for a long time. I would like to take advantage of as much optimization as possible, preferably also of (not always supported) instruction sets. On the other hand I would like my application to be portable and easy to use (so I would not like to compile 5 different versions and let the user choose). Is there a possibility to compile 5 different versions of my code and dynamically run the most optimized version possible at execution time? With 5 different versions I mean with different instruction sets and different optimizations for processors. I don't care about the size of the application. At this moment I'm using gcc on Linux (my code is in C++), but I'm also interested in this for the Intel compiler and for the MinGW compiler for compilation to Windows. The executable doesn't have to be able to run on different OSes, but ideally there would be something possible with automatically selecting 32-bit and 64-bit as well. Edit: Please give clear pointers on how to do it, preferably with small code examples or links to explanations. From my point of view I need a super generic solution, which is applicable to any random C++ project I have later. Edit: I assigned the bounty to ShuggyCoUk; he had a great number of pointers to look out for. I would have liked to split it between multiple answers but that is not possible. I haven't implemented this yet, so the question is still 'open'! Please, still add and/or improve answers, even though there is no bounty to be given anymore. Thanks everybody!
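
    One concrete route (a sketch of my own, untested, not taken from the answers): build the hot routines several times in separate translation units with different -m/-O flags, then select a function pointer once at runtime using GCC's CPU-detection builtins. Newer GCCs (6+) can automate the same idea with __attribute__((target_clones(...))). The routine names below are hypothetical placeholders.

      // Dispatch sketch - assumes GCC >= 4.8 for __builtin_cpu_supports.
      // sum_plain/sum_sse2/sum_avx are the same routine compiled in separate
      // .cpp files with -O2, -O2 -msse2 and -O2 -mavx respectively.
      #include <cstddef>

      double sum_plain(const double* p, std::size_t n);
      double sum_sse2 (const double* p, std::size_t n);
      double sum_avx  (const double* p, std::size_t n);

      typedef double (*sum_fn)(const double*, std::size_t);

      sum_fn pick_sum()
      {
          __builtin_cpu_init();                       // populate CPU feature info
          if (__builtin_cpu_supports("avx"))  return sum_avx;
          if (__builtin_cpu_supports("sse2")) return sum_sse2;
          return sum_plain;
      }

      // usage: static const sum_fn do_sum = pick_sum();  ...  do_sum(data, n);

    Selecting between 32-bit and 64-bit cannot be done this way inside one process; that part still needs a small launcher that starts the appropriate binary.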

    Read the article

  • Eclipse (Aptana) Typing Lag

    - by Zack
    Hello SO, I've been using Aptana for some time now, and recently I've been dealing with files that are really, really big (500+ lines of code, which is huge for me, being a novice developer). Whenever I deal with smaller files, I get that weird sensation that I'm "in front of" what's typing, but now I'm quite sure of it--there is a significant lag between when I type something and when I see the text appear on screen. I don't have this issue with Dreamweaver CS3, so I know my computer has the capability to edit these files without this happening, but Eclipse still lags. I also can't see what's being deleted when I hold down backspace: I see the first few characters get deleted, but then everything "hangs." Once I release the backspace key, the characters that would've been shown deleting instantly vanish all at once. The same thing happens with the forward delete key. I'm beginning to think this is an issue with Java, since I have the same feeling that everything is slightly "behind me" when I'm using -any- Java application. The computer is an Intel Pentium 4 3.2 GHz Prescott, with 2GB of DDR400 RAM and a Radeon HD3650 graphics card. If anyone knows how to fix this lagging issue, I'm all ears (eyes?); if anyone can recommend a different IDE with capabilities similar to Aptana (I do Python, HTML, CSS and JS; I use Git for SCM), I'd be glad to give it a try. Thanks!

    Read the article

  • Python import error: Symbol not found, but the symbol is present in the file

    - by Autopulated
    I get this error when I try to import ssrc.spread: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so, 2): Symbol not found: __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE The file in question (_spread.so) includes the symbol: $ nm _spread.so | grep _ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE (twice because the file is a fat ppc/x86 binary) The archive header information of _spread.so is: $ otool -fahv _spread.so Fat headers fat_magic FAT_MAGIC nfat_arch 2 architecture ppc7400 cputype CPU_TYPE_POWERPC cpusubtype CPU_SUBTYPE_POWERPC_7400 capabilities 0x0 offset 4096 size 235272 align 2^12 (4096) architecture i386 cputype CPU_TYPE_I386 cpusubtype CPU_SUBTYPE_I386_ALL capabilities 0x0 offset 241664 size 229360 align 2^12 (4096) /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so (architecture ppc7400): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC PPC ppc7400 0x00 BUNDLE 10 1420 NOUNDEFS DYLDLINK BINDATLOAD TWOLEVEL WEAK_DEFINES BINDS_TO_WEAK /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so (architecture i386): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC I386 ALL 0x00 BUNDLE 11 1604 NOUNDEFS DYLDLINK BINDATLOAD TWOLEVEL WEAK_DEFINES BINDS_TO_WEAK And my python is python 2.6.4: $ which python | xargs otool -fahv Fat headers fat_magic FAT_MAGIC nfat_arch 2 architecture ppc cputype CPU_TYPE_POWERPC cpusubtype CPU_SUBTYPE_POWERPC_ALL capabilities 0x0 offset 4096 size 9648 align 2^12 (4096) architecture i386 cputype CPU_TYPE_I386 cpusubtype CPU_SUBTYPE_I386_ALL capabilities 0x0 offset 16384 size 13176 align 2^12 (4096) /Library/Frameworks/Python.framework/Versions/2.6/bin/python (architecture ppc): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC PPC ALL 0x00 EXECUTE 11 1268 NOUNDEFS DYLDLINK TWOLEVEL /Library/Frameworks/Python.framework/Versions/2.6/bin/python (architecture i386): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC I386 ALL 0x00 EXECUTE 11 1044 NOUNDEFS DYLDLINK TWOLEVEL There seems to be a difference in the ppc architecture in the files, but I'm running on an intel, so I don't see why this should cause a problem. So why might the symbol not be found?

    Read the article

  • Problem installing mysql gem on Snow Leopard: uninitialized constant MysqlCompat::MysqlRes

    - by emson
    Hi all, I've got a problem trying to install the Ruby mysql gem driver. I recently upgraded to Snow Leopard and did the Hivelogic manual install of MySQL. This all seems to work fine, as I can access mysql from the command line and make changes to the database. My problem is that if I now use rake db:migrate I get: rake aborted! uninitialized constant MysqlCompat::MysqlRes (See full trace by running task with --trace) Now it appears that my mysql gem isn't working correctly, as I can access MySQL fine from Python using the Python driver (which I compiled too). I therefore tried to rebuild the gem using the following command from this site: http://techliberty.blogspot.com/ (incidentally I am using a recent Intel MacBook Pro): sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config This compiles, although I get 'No definition for ...' messages for the documentation: Building native extensions. This could take a while... Successfully installed mysql-2.8.1 1 gem installed Installing ri documentation for mysql-2.8.1... No definition for next_result No definition for field_name ... I'm a little stumped, as my mysql_config is located in the correct place: /usr/local/mysql/bin/mysql_config And I have removed all other instances of the mysql gem from my system. Any suggestions would be greatly appreciated. Many thanks. PS I saw this previous post http://stackoverflow.com/questions/1332207/uninitialized-constant-mysqlcompatmysqlres-using-mms2r-gem but it doesn't seem applicable to my version.

    Read the article

  • J2ME cache issue

    - by kiennt
    I have to write a J2ME app to retrieve images from a server and display them on a mobile phone. I have seen and tested that Snaptu has a mechanism to cache images, even with 100 images (both normal size and zoom size). I wonder how they can do that? I thought those guys use RMS to save the image stream as data. But when I check the working folder of the simulator (I use Windows XP and Sun Wireless Toolkit 3.0, the emulator device I use to run my program is CLDC Device 1 - my working folder is C:\Document And Settings\Administrator\javame-sdk\3.0\work\6\appdb), I see some .db files. When I delete these files, I can still view the cached images in my emulator???? I also thought that those guys use heap memory to save the images. But that can't be right, because when I set the device memory limit to 2MB (like some mobile phones) and load and view 100 images in zoom size, it didn't cause an OutOfMemory error? It's so weird. Can anyone help me? Thanks

    Read the article

  • C & MinGW: Hello World gives me the error "program too big to fit in memory"

    - by user1692088
    I'm new here. Here's my problem: I installed MinGW on my Windows 7 Home Premium 32-bit netbook with an Intel Atom CPU N550, 1.50GHz and 2GB RAM. Now I made a file named hello.h and tried to compile it via CMD with the following command: "gcc c:\workspace\c\helloworld\hello.h -o out.exe" It compiles with no error, but when I try to run out.exe, it gives me the following error: "program too big to fit in memory" Things I have checked: I have added "C:\MinGW\bin" to the Windows PATH variable. I have googled for about one hour, but since I'm a newbie, I can't really figure out what the problem is. I have compiled the same code on my 64-bit machine; it compiles perfectly, but cannot be run due to the 64-bit vs. 16-bit incompatibility. I'd really appreciate it if someone could figure out what the problem is. Btw, here's my hello.h: #include <stdio.h> int main(void){ printf("Hello, World\n"); } ... That's it. Thanks for your replies. Cheers, Boris
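
    For what it's worth, the usual cause of exactly this symptom (an assumption on my part, not verified on this machine): when gcc is handed a .h file it emits a precompiled header rather than a linked PE executable, so the out.exe produced here is not a real program and Windows rejects it with "program too big to fit in memory". A minimal sketch of the fix - give the source a .c extension and compile that instead:

      /* hello.c - same contents as hello.h, but with a .c extension so gcc
         compiles and links a real executable.
         Build (assuming MinGW's gcc is on the PATH):
           gcc c:\workspace\c\helloworld\hello.c -o out.exe              */
      #include <stdio.h>

      int main(void)
      {
          printf("Hello, World\n");
          return 0;
      }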

    Read the article

  • Performance of Java matrix math libraries?

    - by dfrankow
    We are computing something whose runtime is bound by matrix operations. (Some details below if interested.) This experience prompted the following question: Do folks have experience with the performance of Java libraries for matrix math (e.g., multiply, inverse, etc.)? For example: JAMA: http://math.nist.gov/javanumerics/jama/ COLT: http://acs.lbl.gov/~hoschek/colt/ Apache commons math: http://commons.apache.org/math/ I searched and found nothing. Details of our speed comparison: We are using Intel FORTRAN (ifort (IFORT) 10.1 20070913). We have reimplemented it in Java (1.6) using Apache commons math 1.2 matrix ops, and it agrees to all of its digits of accuracy. (We have reasons for wanting it in Java.) (Java doubles, Fortran real*8). Fortran: 6 minutes, Java 33 minutes, same machine. jvisualvm profiling shows much time spent in RealMatrixImpl.{getEntry,isValidCoordinate} (which appear to be gone in unreleased Apache commons math 2.0, but 2.0 is no faster). Fortran is using Atlas BLAS routines (dpotrf, etc.). Obviously this could depend on our code in each language, but we believe most of the time is in equivalent matrix operations. In several other computations that do not involve libraries, Java has not been much slower, and sometimes much faster.

    Read the article

  • MS SQL Server: high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have an MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB. Running on Windows 2008 Standard. Dual Intel Xeon X5355 @ 2.00 GHz. 4 GB RAM. When I open the Activity Monitor, I see that I have a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 secs before it even starts to "work". I know that it is not network latency. (I have a 301 redirect from the same server that is executed instantly). When the first request has been served, it seems as if it's no longer asleep, and every subsequent request is served instantly with the speed of light. The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have only the default clustered index on each primary key column. Any ideas?

    Read the article

  • How can I create an executable to run on a certain processor architecture (instead of certain OS)?

    - by CrazyJugglerDrummer
    So I take my C++ program in Visual Studio, compile, and it'll spit out a nice little EXE file. But EXEs will only run on Windows, and I hear a lot about how C/C++ compiles into assembly language, which runs directly on a processor. The EXE runs with the help of Windows, or I could have a program that makes an executable that runs on a Mac. But aren't I compiling C++ code into assembly language, which is processor specific? My insights: I'm guessing I'm probably not. I know there's an Intel C++ compiler, so would it make processor-specific assembly code? EXEs run on Windows, so they take advantage of tons of things already set up, from graphics packages to the massive .NET framework. A processor-specific executable would be literally starting from scratch, with just the instruction set of the processor. Would this executable be a file type? We could be running Windows and open it, but then would control switch to the processor only? I assume this executable would be something like an operating system, in that it would have to be run before anything else was booted up, and have only the processor instruction set to "use".

    Read the article

  • "ImportError: cannot import name Exchange" when using Celery w/ Kombu installed

    - by alukach
    I'm trying to create a Celery worker node on an Amazon EC2 Windows 2008 R2 instance. Due to a plugin we require for another Python module, we are required to use Python 3. I've installed Python 3.2 and the necessary modules and run Celery, but for some reason it states that it can't import Exchange and Queue from Kombu, despite the fact that Kombu is installed and can be imported by Python. Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\Users\Administrator>celery Traceback (most recent call last): File "C:\Python32\Scripts\celery-script.py", line 9, in <module> load_entry_point('celery==3.0.11', 'console_scripts', 'celery')() File "C:\Python32\lib\site-packages\celery\__main__.py", line 13, in main from celery.bin.celery import main File "C:\Python32\lib\site-packages\celery\bin\celery.py", line 21, in <module> from celery.utils import term File "C:\Python32\lib\site-packages\celery\utils\__init__.py", line 22, in <module> from kombu import Exchange, Queue ImportError: cannot import name Exchange C:\Users\Administrator>python Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from kombu import Exchange, Queue >>> print('WTF?!?') WTF?!? >>> This problem seems similar to this and this, but I have not been able to remedy the issue. Any ideas?

    Read the article

  • how can I parse and group/do stats on user agent strings?

    - by user151841
    I have a database that has the various user-agent strings of visitors to our site. I'd like to do a 'survey' of them to see what browsers our users are using, so that I can know what features I can use in future development. Is there a tool to parse and run statistics on user-agent strings, or a bunch of strings like this? Ideally, I'd like to see a hierarchical grouping of the stats. For instance: Opera/9.80 (Windows Mobile; WCE; Opera Mobi/WMD-50301; U; en) Presto/2.4.13 Version/10.00 Opera/9.80 (J2ME/MIDP; Opera Mini/4.2.14912/1280; U; en) Presto/2.2.0 Opera/9.80 (J2ME/MIDP; Opera Mini/4.2.13918/812; U; en) Presto/2.2.0 Opera/9.64 (Macintosh; Intel Mac OS X; U; en) Presto/2.1.1 Opera/9.60 (J2ME/MIDP; Opera Mini/4.2.13918/786; U; en) Presto/2.2.0 Opera/9.60 (J2ME/MIDP; Opera Mini/4.0.10992/432; U; en) Presto/2.2.0 I'd like to see 6 entries for Opera, broken down into 3 for 9.80, 1 for 9.64 and 2 for 9.60, and so forth for all browsers. Other dimensions, such as OS, would cross the boundaries of the browser version hierarchy, but it might be nice to see also.

    Read the article

  • How to secure login and member area with SSL certificate?

    - by citronas
    Background: I have an ASP.NET web application project that should contain a public and a member area. Now I want to implement SSL encryption to secure communication between the client and the server. (In the university we have an unsecured wireless network and you can use a WLAN sniffer to read usernames/passwords. I do not want to have this security problem for my application, so I thought of SSL encryption.) The application is running on IIS 7.5. Is it possible to have one webapp that has unsecured pages (like the public area) and a secured area (like the member area, which requires a login)? If yes, how can I realise the communication between these two areas? Example: My webapp is hosted on http://foo.abc. I have pages like http://foo.abc/default.aspx and http://foo.abc/foo.aspx. In the same project is a page like /member/default.aspx which is protected by a login on the page http://foo.abc/login.aspx. So I would need to implement SSL for the page /login.aspx and all pages in /member/. How can I do that? I just found out how to create SSL certificates in IIS 7.5 and how to add such a binding to a webapp. But how can I tell my webapp which pages should be called with https and not with http? What is the best practice there?

    Read the article

  • Multiple elements with the same name with SimpleXML and Java

    - by LouieGeetoo
    I'm trying to use SimpleXML to parse an XML document (an ItemLookupResponse for a book from the Amazon Product Advertising API) which contains the following element: <ItemAttributes> <Author>Shane Conder</Author> <Author>Lauren Darcey</Author> <Manufacturer>Pearson Educacion</Manufacturer> <ProductGroup>Book</ProductGroup> <Title>Android Wireless Application Development: Barnes & Noble Special Edition</Title> </ItemAttributes> My problem is that I don't know how to deal with the multiple possible Author elements. Here's what I have right now for the corresponding POJO (Plain Old Java Object), keeping in mind that it's not handling the case of multiple Authors: @Element public class ItemAttributes { @Element public String Author; @Element public String Manufacturer; @Element public String Title; } (I don't care about the ProductGroup, so it's not in the class -- I'm just setting SimpleXML's strict mode to off to allow for that.) I couldn't find an example in the documentation that corresponded with such a case. Using an ElementList with (inline=true) seemed along the right lines, but I didn't see how to do it for String (as opposed to a separate Author class, which I have no need for and don't see how it would even work). Here's a similar question and answer, but for PHP: php - simpleXML how to access a specific element with the same name as others? I don't know what the Java equivalent would be to the accepted answer. Thanks in advance.

    Read the article

  • OpenGL Colour Interpolation

    - by Will-of-fortune
    I'm currently working on a little project in C++ and OpenGL and am trying to implement a colour selection tool similar to that in Photoshop, as below. However I am having trouble with interpolation of the large square. Working on my desktop computer with an 8800 GTS, the result was similar but the blending wasn't as smooth. This is the code I am using: GLfloat swatch[] = { 0,0,0, 1,1,1, mR,mG,mB, 0,0,0 }; GLint swatchVert[] = { 400,700, 400,500, 600,500, 600,700 }; glVertexPointer(2, GL_INT, 0, swatchVert); glColorPointer(3, GL_FLOAT, 0, swatch); glDrawArrays(GL_QUADS, 0, 4); Moving on to my laptop with Intel Graphics HD 3000, the result was even worse with no change in code. I thought it was OpenGL splitting the quad into two triangles, so I tried rendering using triangles and interpolating the colour in the middle of the square myself, but it still doesn't quite match the result I was hoping for. Any help would be appreciated. Thanks.
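
    The diagonal seam does come from the quad being split into two triangles, each of which interpolates only its own three corners. A rough, untested sketch of one workaround in the same fixed-function style: add a centre vertex whose colour is the average of the four corners and draw a triangle fan. A finer grid of quads, or doing the blend in a fragment shader, gets even closer to the Photoshop picker.

      // Centre colour = average of the four corner colours (black, white, mR/mG/mB, black)
      GLfloat cR = (0.0f + 1.0f + mR + 0.0f) / 4.0f;
      GLfloat cG = (0.0f + 1.0f + mG + 0.0f) / 4.0f;
      GLfloat cB = (0.0f + 1.0f + mB + 0.0f) / 4.0f;

      GLfloat fanColours[] = {
          cR, cG, cB,                             // centre
          0,0,0,  1,1,1,  mR,mG,mB,  0,0,0,       // the four corners
          0,0,0                                   // first corner again to close the fan
      };
      GLint fanVerts[] = {
          500, 600,                               // centre of the 400..600 x 500..700 square
          400,700, 400,500, 600,500, 600,700,
          400,700
      };
      glVertexPointer(2, GL_INT, 0, fanVerts);
      glColorPointer(3, GL_FLOAT, 0, fanColours);
      glDrawArrays(GL_TRIANGLE_FAN, 0, 6);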

    Read the article

  • how to implement a really efficient bitvector sorting in python

    - by xiao
    Hello guys! Actually this is an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here What I am interested in is just how fast the implementation could be in Python. I have done a naive implementation with the BitVector module. The code is as follows: from BitVector import BitVector import timeit import random import time import sys def sort(input_li): return sorted(input_li) def vec_sort(input_li): bv = BitVector( size = len(input_li) ) for i in input_li: bv[i] = 1 res_li = [] for i in range(len(bv)): if bv[i]: res_li.append(i) return res_li if __name__ == "__main__": test_data = range(int(sys.argv[1])) print 'test_data size is:', sys.argv[1] random.shuffle(test_data) start = time.time() sort(test_data) elapsed = (time.time() - start) print "sort function takes " + str(elapsed) start = time.time() vec_sort(test_data) elapsed = (time.time() - start) print "sort function takes " + str(elapsed) start = time.time() vec_sort(test_data) elapsed = (time.time() - start) print "vec_sort function takes " + str(elapsed) I have tested array sizes from 100 to 10,000,000 on my MacBook (2GHz Intel Core 2 Duo, 2GB SDRAM); the results are as follows: test_data size is: 1000 sort function takes 0.000274896621704 vec_sort function takes 0.00383687019348 test_data size is: 10000 sort function takes 0.00380706787109 vec_sort function takes 0.0371489524841 test_data size is: 100000 sort function takes 0.0520560741425 vec_sort function takes 0.374383926392 test_data size is: 1000000 sort function takes 0.867373943329 vec_sort function takes 3.80475401878 test_data size is: 10000000 sort function takes 12.9204008579 vec_sort function takes 38.8053860664 What disappoints me is that even when the test_data size is 10,000,000, the sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?

    Read the article

  • implement SIMD in C++

    - by Hristo
    I'm working on a bit of code and I'm trying to optimize it as much as possible, basically get it running under a certain time limit. The following makes the call... static affinity_partitioner ap; parallel_for(blocked_range<size_t>(0, T), LoopBody(score), ap); ... and the following is what is executed. void operator()(const blocked_range<size_t> &r) const { int temp; int i; int j; size_t k; size_t begin = r.begin(); size_t end = r.end(); for(k = begin; k != end; ++k) { // for each trainee temp = 0; for(i = 0; i < N; ++i) { // for each sample int trr = trRating[k][i]; int ei = E[i]; for(j = 0; j < ei; ++j) { // for each expert temp += delta(i, trr, exRating[j][i]); } } myscore[k] = temp; } } I'm using Intel's TBB to optimize this. But I've also been reading about SIMD and SSE2 and things along that nature. So my question is, how do I store the variables (i,j,k) in registers so that they can be accessed faster by the CPU? I think the answer has to do with implementing SSE2 or some variation of it, but I have no idea how to do that. Any ideas? Thanks, Hristo
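
    A note on the register question, plus a sketch: local loop counters like i, j and k are almost always kept in registers by the optimizer already, so the realistic win is vectorising the inner accumulation. Below is a rough, untested illustration of what SSE2 intrinsics look like; it only sums a plain int array, because delta() would first have to be made branch-free (e.g. via table lookups) before the real inner loop could be vectorised this way.

      #include <emmintrin.h>   // SSE2 intrinsics
      #include <cstddef>

      // Sum an array of 32-bit ints four lanes at a time.
      int sum_sse2(const int* data, std::size_t n)
      {
          __m128i acc = _mm_setzero_si128();
          std::size_t i = 0;
          for (; i + 4 <= n; i += 4) {
              __m128i v = _mm_loadu_si128((const __m128i*)(data + i)); // unaligned load
              acc = _mm_add_epi32(acc, v);                             // 4 additions per step
          }
          int lane[4];
          _mm_storeu_si128((__m128i*)lane, acc);
          int total = lane[0] + lane[1] + lane[2] + lane[3];
          for (; i < n; ++i)                                           // scalar tail
              total += data[i];
          return total;
      }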

    Read the article

  • Add two 32-bit integers in Assembler for use in VB6

    - by Emtucifor
    I would like to come up with the byte code in assembler (assembly?) for Windows machines to add two 32-bit longs and throw away the carry bit. I realize the "Windows machines" part is a little vague, but I'm assuming that the bytes for ADD are pretty much the same in all modern Intel instruction sets. I'm just trying to abuse VB a little and make some things faster. So... if the string "8A4C240833C0F6C1E075068B442404D3E0C20800" is the assembly code for SHL that can be "injected" into a VB6 program for a fast SHL operation expecting two Long parameters (we're ignoring here that 32-bit longs in VB6 are signed, just pretend they are unsigned), what is the hex string of bytes representing assembler instructions that will do the same thing to return the sum? The hex code above for SHL is, according to the author: mov eax, [esp+4] mov cl, [esp+8] shl eax, cl ret 8 I spit those bytes into a file and tried unassembling them in a windows command prompt using the old debug utility, but I figured out it's not working with the newer instruction set because it didn't like EAX when I tried assembling something but it was happy with AX. I know from comments in the source code that SHL EAX, CL is D3E0, but I don't have any reference to know what the bytes are for instruction ADD EAX, CL or I'd try it. I tried flat assembler and am not getting anything I can figure out how to use. I used it to assemble the original SHL code and got a very different result, not the same bytes. Help?
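
    Following the same stdcall pattern as the SHL stub (two Long parameters at [esp+4] and [esp+8], result in EAX, ret 8), one candidate is sketched below. The encodings are taken from the standard Intel opcode tables and match the mov/ret bytes already visible in the SHL string, but this is untested here, so please verify it with a disassembler before injecting it.

      ; add two 32-bit values from the stack, carry bit discarded
      mov  eax, [esp+4]      ; 8B 44 24 04
      add  eax, [esp+8]      ; 03 44 24 08
      ret  8                 ; C2 08 00
      ; -> byte string "8B44240403442408C20800"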

    Read the article

  • java GC periodically enters into several full GC cycles

    - by Peter
    Environment: sun JDK 1.6.0_16 vm settings: -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024 -Xmx1024M -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4(6 also checked) -XX:MaxPermSize=128M OS: windows server 2003 processor: 4 cores of INTEL XEON 5130, 2000 Hz my application description: high intensity of concurrent(java 5 concurrency used) operations completed each time by commit to oracle. it's about 20-30 threads run non stop, doing tasks. application runs in JBOSS web container. My GC starts work normally, I see a lot of small GCs and all that time CPU shows good load, like all 4 cores loaded to 40-50%, CPU graph is stable. Then , after 1 min of good work, CPU starts drop to 0% on 2 cores from 4, it's graph becomes unstable, goes up and down("teeth"). I see, that my threads work slower(I have monitoring), I see that GC starts produce a lot of FULL GC during that time and next 4-5 minutes this situation remains as is, then for short period of time, like 1 minute, it gets back to normal situation, but shortly after that all bad thing repeats. Question: Why I have so frequent full GC??? How to prevent that? I played with SurvivorRatio - does not help. I noticed, that application behaves normally until first FULL GC occurs, while I have enough memory. Then it runs badly. my GC LOG: starts good then long period of FULL GCs(many of them) 1027.861: [GC 942200K-623526K(991232K), 0.0887588 secs] 1029.333: [GC 803279K(991232K), 0.0927470 secs] 1030.551: [GC 967485K-625549K(991232K), 0.0823024 secs] 1030.634: [GC 625957K(991232K), 0.0763656 secs] 1033.126: [GC 969613K-632963K(991232K), 0.0850611 secs] 1033.281: [GC 649899K(991232K), 0.0378358 secs] 1035.910: [GC 813948K(991232K), 0.3540375 secs] 1037.994: [GC 967729K-637198K(991232K), 0.0826042 secs] 1038.435: [GC 710309K(991232K), 0.1370703 secs] 1039.665: [GC 980494K-972462K(991232K), 0.6398589 secs] 1040.306: [Full GC 972462K-619643K(991232K), 3.7780597 secs] 1044.093: [GC 620103K(991232K), 0.0695221 secs] 1047.870: [Full GC 991231K-626514K(991232K), 3.8732457 secs] 1053.739: [GC 942140K(991232K), 0.5410483 secs] 1056.343: [Full GC 991232K-634157K(991232K), 3.9071443 secs] 1061.257: [GC 786274K(991232K), 0.3106603 secs] 1065.229: [Full GC 991232K-641617K(991232K), 3.9565638 secs] 1071.192: [GC 945999K(991232K), 0.5401515 secs] 1073.793: [Full GC 991231K-648045K(991232K), 3.9627814 secs] 1079.754: [GC 936641K(991232K), 0.5321197 secs]

    Read the article

  • How to know if a device can be disabled or not?

    - by user326498
    I use the following code to enable/disable a device installed on my computer: SP_PROPCHANGE_PARAMS params; memset(&params, 0, sizeof(params)); devParams.cbSize = sizeof(devParams); params.ClassInstallHeader.cbSize = sizeof(params.ClassInstallHeader); params.ClassInstallHeader.InstallFunction = DIF_PROPERTYCHANGE; params.Scope = DICS_FLAG_GLOBAL; params.StateChange = DICS_DISABLE ; params.HwProfile = 0; // current profile if(!SetupDiSetClassInstallParams(m_hDev, &m_hDevInfo,&params.ClassInstallHeader,sizeof(SP_PROPCHANGE_PARAMS))) { dwErr = GetLastError(); return FALSE; } if(!SetupDiCallClassInstaller(DIF_PROPERTYCHANGE,m_hDev,&m_hDevInfo)) { dwErr = GetLastError(); return FALSE; } return TRUE; This code works perfectly only for those devices that can also be disabled by using Windows Device Manager, and won't work for some un-disabled devices such as my cpu device: Intel(R) Pentium(R) Dual CPU E2160 @ 1.80GHz. So the problem is how to determine if a device can be disabled or not programmatically? Is there any API to realize this goal? Thank you!
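
    One avenue that may answer the "can it be disabled?" question directly (an assumption on my part, worth verifying against a few device classes): the configuration manager exposes a DN_DISABLEABLE bit in the devnode status flags, which appears to be what Device Manager itself consults before offering Disable. A minimal sketch:

      #include <windows.h>
      #include <cfg.h>           // DN_DISABLEABLE devnode status bit
      #include <cfgmgr32.h>      // CM_Get_DevNode_Status (link with cfgmgr32.lib)

      // devInst comes from SP_DEVINFO_DATA::DevInst for the device in question.
      BOOL CanDisableDevice(DEVINST devInst)
      {
          ULONG status = 0, problem = 0;
          if (CM_Get_DevNode_Status(&status, &problem, devInst, 0) != CR_SUCCESS)
              return FALSE;
          return (status & DN_DISABLEABLE) ? TRUE : FALSE;
      }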

    Read the article

  • Javascript JQUERY AJAX: When Are These Implemented

    - by Michael Moreno
    I'm learning javascript. Poked around this excellent site to gather intel. Keep coming across questions / answers about javascript, JQUERY, JQUERY with AJAX, javascript with JQUERY, AJAX alone. My conclusion: these are all individually powerful and useful. My confusion: how does one determine which/which combination to use ? I've concluded that javascript is readily available on most browsers. For example, I can extend a simple HTML page with <html> <body> <script type="text/javascript"> document.write("Hello World!"); </script> </body> </html> However, within the scope of Python/DJANGO, many of these questions are JQUERY and AJAX related. At which point or under what development circumstances would I conclude that javascript alone isn't going to "cut it", and I need to implement JQUERY and/or AJAX and/or some other permutation ?

    Read the article

  • delphi app freezes whole win7 system

    - by avar
    Hello, I have a simple program that sorts a text file according to the length of the words on each line. This program works without problems on my old XP-based machine. Now I run this program on my new Win7/Intel Core i5 machine: it freezes the whole system and goes back to normal after it finishes its work. I've investigated the code and found the line causing the freeze; it was this specific line... caption := IntToStr(i) + '..' + IntTostr(ii); I've changed it to caption := IntTostr(ii); //slow rate change and there is no freeze, and then I've changed it to caption := IntTostr(i); //fast rate change and it freezes again. My complete main procedure code is var tword : widestring; i,ii,li : integer; begin tntlistbox1.items.LoadFromFile('d:\new folder\ch.txt'); tntlistbox2.items.LoadFromFile('d:\new folder\uy.txt'); For ii := 15 Downto 1 Do //slow change Begin For I := 0 To TntListBox1.items.Count - 1 Do //very fast change Begin caption := IntToStr(i) + '..' + IntTostr(ii); //problematic line tword := TntListBox1.items[i]; LI := Length(tword); If lI = ii Then Begin tntlistbox3.items.Add(Trim(tntlistbox1.Items[i])); tntlistbox4.items.Add(Trim(tntlistbox2.Items[i])); End; End; End; end; Any idea why? And how to fix it? I use Delphi 2007/Win32

    Read the article

  • How do i write tasks? (parallel code)

    - by acidzombie24
    I am impressed with Intel Threading Building Blocks. I like how I should write task code and not thread code, and I like how it works under the hood, with my limited understanding (tasks are in a pool, there won't be 100 threads on 4 cores, a task is not guaranteed to run because it isn't on its own thread and may be far into the pool. But it may be run with another related task, so you can't do bad things like typical thread-unsafe code). I wanted to know more about writing tasks. I like the 'Task-based Multithreading - How to Program for 100 cores' video here http://www.gdcvault.com/sponsor.php?sponsor_id=1 (currently the second-to-last link. WARNING: it isn't 'great'). My favourite part was 'solving the maze is better done in parallel', which is around the 48min mark (you can click the link on the left side. That part is really all you need to watch, if any). However I'd like to see more code examples and some API details on how to write tasks. Does anyone have a good resource? I have no idea how a class or pieces of code may look after pushing it onto a pool, or how weird code may look when you need to make a copy of everything, and how much of everything is pushed onto a pool.
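
    Not a full resource, but here is roughly what task code tends to look like in practice (a sketch assuming TBB with a C++0x/C++11 compiler for the lambdas; with older compilers the lambdas become small function objects):

      #include <tbb/task_group.h>
      #include <cstddef>
      #include <vector>

      // Recursive divide-and-conquer expressed as tasks rather than threads.
      // The scheduler maps these tasks onto its worker pool; nothing here says
      // how many OS threads actually run.
      long parallel_sum(const std::vector<long>& v, std::size_t lo, std::size_t hi)
      {
          if (hi - lo < 10000) {                       // small range: just do the work
              long s = 0;
              for (std::size_t i = lo; i < hi; ++i) s += v[i];
              return s;
          }
          std::size_t mid = lo + (hi - lo) / 2;
          long left = 0, right = 0;
          tbb::task_group g;
          g.run([&] { left  = parallel_sum(v, lo, mid); });   // spawn a task, not a thread
          g.run([&] { right = parallel_sum(v, mid, hi); });
          g.wait();                                           // wait for both child tasks
          return left + right;
      }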

    Read the article

  • GSM Cell Towers Location & Triangulation Algorithm (Similar to OpenCellID / Skyhook / Google's MyLocation)

    - by ranabra
    Hi all, assume I have a fingerprint DB of cell towers. The data (including longitude & latitude, cell ID, signal strength, etc.) is gathered by 'wardriving', similar to OpenCellID.org. I would like to be able to get the location of the client mobile phone without GPS (similar to OpenCellID / Skyhook Wireless / Google's 'MyLocation'). The phone sends me info on the cell towers it "sees" at the moment: the cell tower it is connected to, and another 6 neighboring cell towers (assuming GSM). I have read and Googled for a long time and came across several effective theories, such as using SQL 2008 spatial capabilities, a Euclidean algorithm, or a Markov model. However, I am lacking a practical solution, preferably in C# or using SQL 2008 :) The location calculation will be done on the server and not on the client mobile phone; the phone's single job is to send, via HTTP/GPRS, the tower it's connected to and the other neighboring cell towers. Any input is appreciated; I have read so much and so far haven't really advanced much. Thanx
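
    Before anything fancier, one practical baseline worth having (a sketch of my own, not taken from OpenCellID or Skyhook): a signal-strength-weighted centroid of the towers the handset reports, looked up against the fingerprint DB. The names and the dBm-to-weight conversion below are assumptions for illustration; over the small distances of a GSM neighbour set, averaging raw lat/lon is acceptable.

      #include <cmath>
      #include <cstddef>
      #include <utility>
      #include <vector>

      struct TowerFix { double lat, lon; double rssi_dbm; };  // one reported neighbour

      // Weighted centroid: stronger towers pull the estimate towards them.
      // Ignores path-loss modelling, terrain and antenna direction entirely.
      std::pair<double, double> weighted_centroid(const std::vector<TowerFix>& fixes)
      {
          double wsum = 0.0, lat = 0.0, lon = 0.0;
          for (std::size_t i = 0; i < fixes.size(); ++i) {
              double w = std::pow(10.0, fixes[i].rssi_dbm / 10.0);  // dBm -> linear power
              wsum += w;
              lat  += w * fixes[i].lat;
              lon  += w * fixes[i].lon;
          }
          return std::make_pair(lat / wsum, lon / wsum);
      }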

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use intel threading building blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message objects type. The sequential version would be something like this (pseudocode): for each message in message_sequence <- SEQUENTIAL for each handler in (handler_table for message.type) apply handler to message <- SEQUENTIAL The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently. Pros: predictable ordering of messages (ie, we are guaranteed a FIFO processing order) (potentially) lower latency of processing each message Cons: more processing resources available than handlers for a single message type (bad parallelization) bad use of processor cache since message objects need to be copied for each handler to use large overhead for small handlers The pseudocode of this approach would be as follows: for each message in message_sequence <- SEQUENTIAL parallel_for each handler in (handler_table for message.type) apply handler to message <- PARALLEL The second approach is to process the messages in parallel and apply the handlers to each message sequentially. Pros: better use of processor cache (keeps the message object local to all handlers which will use it) small handlers don't impose as much overhead (as long as there are other handlers also to be run) more messages are expected than there are handlers, so the potential for parallelism is greater Cons: Unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic) The pseudocode is as follows: parallel_for each message in message_sequence <- PARALLEL for each handler in (handler_table for message.type) apply handler to message <- SEQUENTIAL The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage.. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
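
    One more approach worth weighing (a rough, untested sketch, assuming TBB 3.0+ with lambda support): a tbb::parallel_pipeline whose input stage is serial_in_order and whose handler stage is parallel. Messages are dispatched in FIFO order and handlers for different messages overlap; only completion order remains non-deterministic, and adding a serial_in_order output stage restores that too if results must be emitted in order.

      #include <tbb/pipeline.h>
      #include <cstddef>
      #include <vector>

      struct Message { int type; /* payload... */ };

      void process(std::vector<Message>& messages)
      {
          std::size_t next = 0;
          tbb::parallel_pipeline(
              /* max live tokens = */ 8,
              tbb::make_filter<void, Message*>(
                  tbb::filter::serial_in_order,
                  [&](tbb::flow_control& fc) -> Message* {
                      if (next == messages.size()) { fc.stop(); return NULL; }
                      return &messages[next++];            // hand messages out in FIFO order
                  }) &
              tbb::make_filter<Message*, void>(
                  tbb::filter::parallel,
                  [](Message* m) {
                      // apply each handler registered for m->type, sequentially per message
                  }));
      }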

    Read the article

  • UDP sockets in ad hoc network (Ubuntu 9.10)

    - by Ekhiotz
    Hi! I am using BSD sockets in Ubuntu 9.10 to send UDP packets in broadcast with the following code: sock_fd = socket(PF_INET,SOCK_DGRAM,IPPROTO_UDP); //sock_fd=socket(AF_INET,SOCK_DGRAM,0); receiver_addr.sin_family = PF_INET; //does not send with broadcast in ad hoc receiver_addr.sin_addr.s_addr = htonl(INADDR_BROADCAST); inet_aton("169.254.255.255",&receiver_addr.sin_addr); receiver_addr.sin_port = htons(port); int broadcast = 1; // this call is what allows broadcast packets to be sent: if (setsockopt(sock_fd, SOL_SOCKET, SO_BROADCAST, &broadcast, sizeof broadcast) == -1) { perror("setsockopt (SO_BROADCAST)"); exit(1); } ret=sendto(sock_fd, packet, size, 0,(struct sockaddr*)&receiver_addr,sizeof(receiver_addr)); Note that is not all the code, it is only to have an idea. The program sends all the data with INADDR_BROADCAST if I am connected to an infrastructure wireless network. However, if my laptop is connected to an ad-hoc network, it is able to receive all the data, but not to send it. I have solved the problem using the 169.254.255.255 broadcast address, but I would like to know what is going on. Thank you in advance!

    Read the article
