Search Results

Search found 7940 results on 318 pages for 'intel wireless'.


  • Theoretical and practical matrix multiplication FLOPs

    - by mjr
    I wrote a traditional matrix multiplication in C++ and tried to measure and compare its theoretical and practical FLOP counts. As I understand it, the inner loop of MM performs 2 operations, so the theoretical FLOP count of a simple MM is 2*n*n*n (2n^3). In practice, however, I measure something like (4+2)*n^3 = 6n^3, i.e. 4n^3 plus the 2 operations per iteration. Similarly, if I only increment a single array element (a[i][j]++), the practical count comes out near 3n^3 and not n^3; again it is (2+1)*n^3 and not 1 operation * n^3. If I instead use a 1D array in three nested loops as the matrix multiplication and compare FLOPs, the practical count is (nearly) the same as the theoretical one and depends exactly on the number of operations in the inner loop. I could not find the reason for this behaviour. What is the reason in both cases? I know the theoretical count is not the same as the practical one because of some operations like loads etc.

    System specification: Intel Core 2 Duo E4500, 3700g memory, 2 MB L2 cache, x64 Fedora 17.

    Sample results for a 512*512 matrix-matrix multiplication:

        Real_time: 1.718368  Proc_time: 1.227672  Total flpops: 807,107,072  MFLOPS: 657.429016
        Real_time: 3.608078  Proc_time: 3.042272  Total flpops: 807,024,448  MFLOPS: 265.270355

    Theoretical FLOPs: 2*512*512*512 = 268,435,456. Practical FLOPs: 6*512^3 = 807,107,072.

    Using a 1D array (float d[size][size], size = 512 or any size):

        for (int j = 0; j < size; ++j) {
            for (int k = 0; k < size; ++k) {
                d[k] = d[k] + e[k] + f[k] + g[k] + r;
            }
        }

        Real_time: 0.002288  Proc_time: 0.002260  Total flpops: 1,048,578  MFLOPS: 464.027161

    Theoretical FLOPs: 4n^2 = 4*512^2 = 1,048,576. Practical FLOPs: 4n^2 + overhead (other operations?) = 1,048,578.

    Three-loop version:

        Real_time: 1.282257  Proc_time: 1.155990  Total flpops: 536,872,000  MFLOPS: 464.426117

    Theoretical FLOPs: 4n^3 = 4*512^3 = 536,870,912. Practical FLOPs: 4n^3 + overhead (other operations?) = 536,872,000.

    Thank you.
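
    For reference, a minimal sketch of the triple-loop multiply being measured, with the 2n^3 bookkeeping made explicit (the array names are illustrative; the measured total is the counter value quoted above):

        #include <cstdio>

        const int N = 512;
        static float a[N][N], b[N][N], c[N][N];

        int main() {
            // Classic triple loop: one multiply and one add per innermost
            // iteration, hence the theoretical count of 2*N*N*N.
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)
                    for (int k = 0; k < N; ++k)
                        c[i][j] += a[i][k] * b[k][j];

            long long theoretical = 2LL * N * N * N;   // 268,435,456 for N = 512
            long long measured = 807107072LL;          // Total flpops reported above
            std::printf("theoretical: %lld\n", theoretical);
            std::printf("measured ops per inner iteration: %.2f\n",
                        measured / (double)(1LL * N * N * N));   // ~6.01
            return 0;
        }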

    Read the article

  • "QFontEngine(Win) GetTextMetrics failed ()" error on 64-bit Windows

    - by David Murdoch
    I'll add 500 of my own rep as a bounty when SO lets me. I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server, the following errors are displayed:

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed ()      ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()      ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        // ...etc....

    The PDF is created and saved... just WITHOUT text. All form fields, images, borders, tables, divs, spans, ps, etc. are rendered accurately... just void of any text at all.

    Server information:

        Windows edition: Windows Server Standard, Service Pack 2
        Processor: Intel Xeon E5410 @ 2.33 GHz
        Memory: 8.00 GB
        System type: 64-bit Operating System

    Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags or a better title, comment them or edit the question. :-)

    Read the article

  • Creating a new workbook in Excel from Python breaks

    - by Marcelo Cantos
    I am trying to use the stock standard win32com approach to drive Excel 2007 from Python. However, when I try to create a new workbook, things go pear-shaped:

        Python 2.6.4 (r264:75706, Nov  3 2009, 13:23:17) [MSC v.1500 32 bit (Intel)] on win32
        ...
        >>> import win32com.client
        >>> excel = win32com.client.Dispatch("Excel.Application")
        >>> wb = excel.Workbooks.Add()
        Traceback (most recent call last):
          File "<pyshell#3>", line 1, in <module>
            wb = excel.Workbooks.Add()
          File "C:\Python26\lib\site-packages\win32com\client\dynamic.py", line 467, in __getattr__
            if self._olerepr_.mapFuncs.has_key(attr): return self._make_method_(attr)
          File "C:\Python26\lib\site-packages\win32com\client\dynamic.py", line 295, in _make_method_
            methodCodeList = self._olerepr_.MakeFuncMethod(self._olerepr_.mapFuncs[name], methodName,0)
          File "C:\Python26\lib\site-packages\win32com\client\build.py", line 297, in MakeFuncMethod
            return self.MakeDispatchFuncMethod(entry, name, bMakeClass)
          File "C:\Python26\lib\site-packages\win32com\client\build.py", line 318, in MakeDispatchFuncMethod
            s = linePrefix + 'def ' + name + '(self' + BuildCallList(fdesc, names, defNamedOptArg, defNamedNotOptArg, defUnnamedArg, defOutArg) + '):'
          File "C:\Python26\lib\site-packages\win32com\client\build.py", line 604, in BuildCallList
            argName = MakePublicAttributeName(argName)
          File "C:\Python26\lib\site-packages\win32com\client\build.py", line 542, in MakePublicAttributeName
            return filter( lambda char: char in valid_identifier_chars, className)
          File "C:\Python26\lib\site-packages\win32com\client\build.py", line 542, in <lambda>
            return filter( lambda char: char in valid_identifier_chars, className)
        UnicodeDecodeError: 'ascii' codec can't decode byte 0x83 in position 52: ordinal not in range(128)

    What is going wrong here? Have I done something silly, or is Python/win32com/Excel somehow broken?
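
    A workaround often suggested for this class of failure, sketched below (whether it sidesteps this particular decode error depends on the installed Office locale): use early binding via gencache, so win32com generates static wrappers once instead of building method code dynamically. The dynamic path through build.py is exactly where the non-ASCII byte trips the ASCII codec in the traceback.

        import win32com.client.gencache

        # EnsureDispatch generates (and caches) makepy wrappers for Excel's type
        # library, avoiding the dynamic _make_method_ code path seen above.
        excel = win32com.client.gencache.EnsureDispatch("Excel.Application")
        wb = excel.Workbooks.Add()
        excel.Visible = True   # optional: show the new workbook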

    Read the article

  • Asymptotic complexity of a compiler

    - by Meinersbur
    What is the maximal acceptable asymptotic runtime of a general-purpose compiler? For clarification: the complexity of the compilation process itself, not of the compiled program, as a function of the program size; for instance, the number of source code characters, statements, variables, procedures, basic blocks, intermediate language instructions, assembler instructions, or whatever. This depends heavily on your point of view, so this is a community wiki. See it from the view of someone who writes a compiler: will the optimisation level -O4 ever be used for larger programs if one of its optimisations takes O(n^6)?

    Related questions: When is superoptimisation (exponential complexity or even incomputable) acceptable? What is acceptable for JITs? Does it have to be linear? What is the complexity of established compilers? GCC? VC? Intel? Java? C#? Turbo Pascal? LCC? LLVM? (References?)

    If you do not know what asymptotic complexity is: how long are you willing to wait until the compiler has compiled your project? (Scripting languages excluded.)
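
    A back-of-the-envelope illustration of why the O(n^6) case matters (the machine speed and program size are assumptions picked only to keep the arithmetic round):

        n   = 10^4 intermediate-language instructions (a modest program)
        n^6 = 10^24 steps
        at 10^9 steps per second: 10^24 / 10^9 = 10^15 seconds, roughly 30 million years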

    Read the article

  • VPython in Eclipse - thinks it has the wrong architecture type.

    - by Duncan Tait
    Evening. So I've recently installed VPython on my MacBook (OS X, Snow Leopard), and it works absolutely fine in IDLE and from the command line (interactive mode). However, Eclipse has issues. Firstly it couldn't find it (which is a bit of an issue actually with all these 'easy install' Python modules, when they don't tell you where they actually install to!), but I searched it out in the depths of Library/Frameworks... and added that to the System PYTHONPATH listbox in Eclipse. Now it can find it, but it says the following:

        Traceback (most recent call last):
          File "/Users/duncantait/dev/workspace/Network_Simulation/src/Basic/Net_Sim1.py", line 15, in <module>
            import visual
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/visual/__init__.py", line 59, in <module>
            import cvisual
        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/visual/cvisual.so, 2): no suitable image found.  Did find:
          /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/visual/cvisual.so: mach-o, but wrong architecture

    I am guessing that VPython might not be built for a 64-bit (Intel) architecture, but the fact remains that it works in both IDLE and the command prompt... so there must be a way to configure Eclipse to run it right? (Wishful thinking.) Thanks for any help! Duncan
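
    A likely line of attack, assuming cvisual.so really is 32-bit only (the paths come from the traceback; the commands are standard OS X tooling): make the interpreter that Eclipse launches run in 32-bit mode so it matches the extension.

        # Check which architectures the extension actually contains:
        file /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/visual/cvisual.so

        # Run a script under the 32-bit slice of a universal Python:
        arch -i386 python2.6 Net_Sim1.py

        # For Apple's system Python on Snow Leopard, prefer 32-bit permanently:
        defaults write com.apple.versioner.python Prefer-32-Bit -bool yes

    In PyDev, the interpreter configuration would then point at a small wrapper script that performs the arch -i386 launch.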

    Read the article

  • Surprising results with .NET multi-threading algorithm

    - by Myles J
    Hi, I recently wrote a C# console timetabling algorithm based on a combination of a genetic algorithm with a few brute-force routines thrown in. The initial results were promising, but I figured I could improve the performance by splitting the brute-force routines up to run in parallel on multi-processor architectures. To do this I used the well-documented producer/consumer model (as documented in this fantastic article: http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle). I changed my code to create one thread per logical processor during the brute-force routines. The performance gains on my workstation were very pleasing. I am running Windows XP on the following hardware:

        Intel Core 2 Quad CPU 2.33 GHz
        3.49 GB RAM

    Initial tests indicated average performance gains of approx 40% when using 4 threads. The next step was to deploy the new multi-threaded version of the algorithm to our higher-spec UAT server. Here is the spec of our UAT server:

        Windows 2003 Server R2 Enterprise x64
        8 CPU (quad-core) AMD Opteron 2.70 GHz
        255 GB RAM

    After running the first round of tests we were all extremely surprised to find that the algorithm actually runs slower on the high-spec W2003 server than on my local XP workstation! In fact, the tests seem to indicate that it doesn't matter how many threads are generated (tests were run with the app spawning between 2 and 32 threads); the algorithm always runs significantly slower on the UAT W2003 server. How could this be? Surely the app should run faster on an 8-CPU quad-core machine than on my Core 2 Quad workstation? Why are we seeing no performance gains from the multi-threading on the W2003 server while the XP workstation tests show gains of up to 40%? Any help or pointers would be appreciated. Regards, Myles
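
    Before blaming the algorithm, one useful control experiment (a self-contained sketch; the loop body is arbitrary CPU-bound work): measure how raw per-thread throughput scales on each box. If this probe also degrades on the Opteron server, the problem is at the hardware/OS level (NUMA memory placement, power management) rather than in the producer/consumer code.

        using System;
        using System.Diagnostics;
        using System.Threading;

        class ScalingProbe
        {
            static double[] sink;

            static void Main()
            {
                int threadCount = Environment.ProcessorCount;
                sink = new double[threadCount];
                Thread[] workers = new Thread[threadCount];
                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < threadCount; i++)
                {
                    int id = i;   // capture a copy, not the loop variable
                    workers[i] = new Thread(delegate()
                    {
                        double x = 0;
                        for (int n = 0; n < 100000000; n++)
                            x += Math.Sqrt(n);   // pure CPU work, no shared state
                        sink[id] = x;            // keep the result observable
                    });
                    workers[i].Start();
                }
                foreach (Thread t in workers) t.Join();
                Console.WriteLine("{0} threads: {1} ms", threadCount, sw.ElapsedMilliseconds);
            }
        }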

    Read the article

  • Can't properly shut down Ubuntu, and sound problem, on HP Mini 311 netbook

    - by Viele
    Hi, I recently bought an HP Mini 311 netbook. I replaced its hard drive and installed Ubuntu 10.04. Since then, I have encountered some very strange problems with its sound and with shutdown/reboot. At times, when I start the computer, it has no sound: in the GUI the volume is at max, but no sound comes out. This sometimes also happens after upgrades, hibernation, and toggling the wireless radio button. Strangely, when the sound is out, the device also refuses to shut down. If I shut the computer down using the GUI, it simply goes back to the login screen without actually shutting down. If I use "sudo shutdown 0", the computer hangs on the loading screen of the shutdown process, and I have to force it off by holding the power button down. Usually (probably always) after I force a power-off and start it up again, the sound and shutdown become normal. I wonder if anyone has clues regarding the cause of this problem. Info about the computer:

        1) Installed Ubuntu 10.04 LTS RC, later upgraded to the formally released version.
        2) cat /proc/asound/version == "Advanced Linux Sound Architecture Driver Version 1.0.21";
           however, when running 'alsaconf' the version displayed is 1.0.23.

    Any help is appreciated. Thanks
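
    When the sound disappears, one low-risk diagnostic worth trying before a forced reboot (standard Ubuntu tooling; whether it also unwedges the shutdown path is an open question):

        # Reload the whole ALSA driver stack without rebooting:
        sudo alsa force-reload

        # Then look for driver errors around the time of the failure:
        dmesg | tail -50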

    Read the article

  • MacBook Pro - Aquamacs - spell check

    - by peggy Li
    I have tried to use spell check in Aquamacs. I highlighted a region of text, then clicked Edit, then "Spell Check Region". I got the error message:

        Error: No word lists can be found for the language "en_US".

    Then I went to the website to download the following dictionaries:

    1. CocoAspell: I just clicked the download button, and it was reported that the download was successful. However, when I tested it (highlighted a text region and clicked spell check), the same error message came out. Do I need to move the downloaded .pkg to a certain place, such as the Applications folder, before I open the .pkg? Or what else do I need to do to make it work?

    2. I also downloaded the base package Aspell (for Intel) and the pre-built dictionaries (as per the website's instructions), the same way as in point 1. I still get the same error message, and the same questions apply.

    I would greatly appreciate it if someone could give me some help. Peggy Li
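
    If the dictionaries are installed but Emacs is consulting the wrong spell-checker binary, the usual fix is to point ispell at aspell explicitly. A sketch (the path is an assumption; adjust it to wherever the aspell binary actually landed):

        ;; In Aquamacs' Preferences.el, or any Emacs init file:
        (setq ispell-program-name "/usr/local/bin/aspell")   ; assumed install location
        (setq ispell-dictionary "en_US")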

    Read the article

  • Academic question: typename

    - by Arman
    Hi, recently I ran into a "simple problem" when porting code from VC++ to gcc/intel. The following compiles without error on VC++:

        #include <vector>
        using std::vector;

        template <class T>
        void test_vec(std::vector<T> &vec)
        {
            typedef std::vector<T> M;   /* ==> add 'typename' on the next line */
            M::iterator ib = vec.begin(), ie = vec.end();
        };

        int main()
        {
            vector<double> x(100, 10);
            test_vec<double>(x);
            return 0;
        }

    With g++, however, we get some unclear errors:

        $ g++ t.cpp
        t.cpp: In function 'void test_vec(std::vector<T, std::allocator<_CharT> >&)':
        t.cpp:13: error: expected `;' before 'ie'
        t.cpp: In function 'void test_vec(std::vector<T, std::allocator<_CharT> >&) [with T = double]':
        t.cpp:18:   instantiated from here
        t.cpp:12: error: dependent-name 'std::M::iterator' is parsed as a non-type, but instantiation yields a type
        t.cpp:12: note: say 'typename std::M::iterator' if a type is meant

    If we add typename before the iterator declaration, the code compiles without problems. If it is possible to make a compiler that understands the code written in the more "natural" way, then it is unclear to me why we should have to add typename. Which rules of the C++ standard (if there are some) would be broken if we allowed all compilers to accept the code without "typename"? Kind regards, Arman.
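
    For reference, the fixed declaration, with the reason spelled out in a comment:

        template <class T>
        void test_vec(std::vector<T> &vec)
        {
            typedef std::vector<T> M;
            // M depends on the template parameter T, so at parse time the
            // compiler cannot know that M::iterator names a type rather than,
            // say, a static data member; 'typename' asserts that it is a type.
            typename M::iterator ib = vec.begin(), ie = vec.end();
        }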

    Read the article

  • Optimizing mathematics on arrays of floats in Ada 95 with GNAT

    - by mat_geek
    Consider the code below. It is supposed to process data at a fixed rate, in one-second batches. It is part of an overall system and can't take up too much time. When running over 100 lots of one second's worth of data, the program takes 35 seconds, or 35%. How do I improve the code to get the processing time down to a minimum? The code will be running on an Intel Pentium-M, which is a P3 with SSE2.

        package FF is new Ada.Numerics.Generic_Elementary_Functions(Float);

        N : constant Integer := 820;
        type A is array(1 .. N) of Float;
        type A3 is array(1 .. 3) of A;

        procedure F(state : in out A3; result : out A3; l : in A; r : in A) is
           s : Float;
           t : Float;
        begin
           for i in 1 .. N loop
              t := l(i) + r(i);
              t := t / 2.0;
              state(1)(i) := t;
              state(2)(i) := t * 0.25 + state(2)(i) * 0.75;
              state(3)(i) := t * 1.0 / 64.0 + state(2)(i) * 63.0 / 64.0;
              for r in 1 .. 3 loop
                 s := state(r)(i);
                 t := FF."**"(s, 6.0) + 14.0;
                 if t > MAX then
                    t := MAX;
                 elsif t < MIN then
                    t := MIN;
                 end if;
                 result(r)(i) := FF.Log(t, 2.0);
              end loop;
           end loop;
        end;
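
    One concrete thing to try, sketched below (a suggestion, not a tested drop-in): FF."**"(s, 6.0) is the Generic_Elementary_Functions power with a floating-point exponent, which typically goes through exp/log, while a sixth power needs only three multiplications.

        declare
           S2 : constant Float := s * s;
           S6 : constant Float := S2 * S2 * S2;   -- s**6 via 3 multiplies, no exp/log
        begin
           t := S6 + 14.0;
        end;

    If that helps, the FF.Log(t, 2.0) call in the same loop is the next candidate to examine under a profiler.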

    Read the article

  • Bare-metal virtualisation for the desktop

    - by Andrew Taylor
    Hi, does anyone have any knowledge about bare-metal virtualisation products? I'm interested in building a new desktop machine for home. I've been looking at the Intel quad-core processors and I'd like to put 8 GB of RAM in there, but it got me thinking about making the most of the available resources. I thought if I could get a good 64-bit machine and put some bare-metal virtualisation on it, then have a primary system, I'd also be able to bring up some extra virtualised systems as and when I needed them. I know most of the bare-metal systems are designed for the server market, but is there anything out there that works well for a desktop? What are the caveats? I presume I won't be able to make the most of any video card I buy; what about just getting a decent screen resolution, will this be a problem? I run a single 24" screen. What about DVD/CD writing, is this possible? I'd like to re-rip my CD collection, and I was hoping the quad 64-bit goodness would help me out with the encoding. I currently use a Mac and couldn't go back to Windows, so that leaves Linux; I was thinking a primary OS of Ubuntu. Does this make a difference? Thanks, Andrew

    Read the article

  • Full-text search error during full-text index population: Error Code '0x80092003'

    - by user360074
    Dear all, I have a problem with the Full-Text Search service in our production environment. Each time I rebuild the full-text catalog, there is no error in the user interface, but there is no data in the full-text catalog:

        Item count: 0
        Catalog size: 0 MB

    OS: Windows Server 2003 R2 Standard Edition, Service Pack 2
    SQL Server version: Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Oct 14 2005 00:33:37 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

    It works on the dev server (Windows XP Professional Version 2002, Service Pack 3) but fails on the production server (Windows Server 2003 R2 Standard Edition, Service Pack 2). This is the crawl log:

        2010-06-02 03:51:31.06 spid24s  Informational: Full-text Full population initialized for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'). Population sub-tasks: 1.
        2010-06-02 03:51:31.06 spid24s  Error '0x80092003' occurred during full-text index population for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'), full-text key value 0x00000006. Attempt will be made to reindex it.
        2010-06-02 03:51:31.06 spid24s  The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'.
        2010-06-02 03:51:31.06 spid24s  Error '0x80092003' occurred during full-text index population for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'), full-text key value 0x00000005. Attempt will be made to reindex it.
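
    Worth noting: build 9.00.1399 is SQL Server 2005 RTM, i.e. no service pack on the SQL side. A low-risk first step (on the assumption that a later build fixes the MSFTE indexing error; RTM full-text had known issues) is to confirm the patch level and then apply the current service pack:

        SELECT SERVERPROPERTY('ProductVersion') AS Version,
               SERVERPROPERTY('ProductLevel')   AS Level;   -- 'RTM' means no SP applied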

    Read the article

  • Application passwords and SQLite security

    - by Bryan
    I have been searching on Google for information regarding application passwords and SQLite security for some time, and nothing that I have found has really answered my questions. Here is what I am trying to figure out:

    1) My application is going to have an optional password activity that will be called when the application is first opened. My questions for this are: a) if I store the password via an Android preference or SQLite database, how can I ensure security and privacy for the password, and b) how should password recovery be handled?

    Regarding b) from above, I have thought about requiring an email address when the password feature is enabled, and also a password hint question for use when requesting password recovery. Upon successfully answering the hint question, the password is then emailed to the address that was submitted. I am not completely confident in the security and privacy of the email method, especially if the email is sent while the user is connected to an open, public wireless network.

    2) My application will be using an SQLite database, which will be stored on the SD card if the user has one. Regardless of whether it is stored on the phone or the SD card, what options do I have for data encryption, and how does that affect application performance?

    Thanks in advance for the time taken to answer these questions. I think there may be other developers struggling with the same concerns.
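
    On question 1a), the standard answer is to store only a salted hash of the password, never the password itself. That also means recovery has to become a reset rather than an email: you cannot send back what you never stored. A minimal sketch with standard Java APIs (the class and method names are mine):

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public final class PasswordHasher {
            // Generate a fresh random salt to store alongside the hash.
            public static byte[] newSalt() {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                return salt;
            }

            // Returns SHA-256(salt || password); store salt and hash, and compare
            // freshly computed hashes on login instead of comparing passwords.
            public static byte[] hash(byte[] salt, String password) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);
                return md.digest(password.getBytes("UTF-8"));
            }
        }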

    Read the article

  • Why is the Clojure Hello World program so slow compared to Java and Python?

    - by viksit
    Hi all, I'm reading "Programming Clojure" and I was comparing some languages I use on some simple code. I noticed that the Clojure implementations were the slowest in each case. For instance:

    Python - hello.py:

        def hello_world(name):
            print "Hello, %s" % name

        hello_world("world")

    and the result:

        $ time python hello.py
        Hello, world
        real 0m0.027s  user 0m0.013s  sys 0m0.014s

    Java - hello.java:

        import java.io.*;

        public class hello {
            public static void hello_world(String name) {
                System.out.println("Hello, " + name);
            }

            public static void main(String[] args) {
                hello_world("world");
            }
        }

    and the result:

        $ time java hello
        Hello, world
        real 0m0.324s  user 0m0.296s  sys 0m0.065s

    and finally, Clojure - hellofun.clj:

        (defn hello-world [username]
          (println (format "Hello, %s" username)))

        (hello-world "world")

    and the results:

        $ time clj hellofun.clj
        Hello, world
        real 0m1.418s  user 0m1.649s  sys 0m0.154s

    That's a whole, gargantuan 1.4 seconds! Does anyone have pointers on what the cause of this could be? Is Clojure really that slow, or are there JVM tricks et al. that need to be used in order to speed up execution? More importantly: isn't this huge difference in performance going to be an issue at some point? (I mean, let's say I was using Clojure for a production system; the gain I get from using Lisp seems completely offset by the performance issues I can see here.) The machine used here is a 2007 MacBook Pro running Snow Leopard, with a 2.16 GHz Intel Core 2 Duo and 2 GB DDR2 SDRAM. BTW, the clj script I'm using is from here and looks like:

        #!/bin/bash

        JAVA=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java
        CLJ_DIR=/opt/jars
        CLOJURE=$CLJ_DIR/clojure.jar
        CONTRIB=$CLJ_DIR/clojure-contrib.jar
        JLINE=$CLJ_DIR/jline-0.9.94.jar
        CP=$PWD:$CLOJURE:$JLINE:$CONTRIB

        # Add extra jars as specified by `.clojure` file
        if [ -f .clojure ]
        then
            CP=$CP:`cat .clojure`
        fi

        if [ -z "$1" ]; then
            $JAVA -server -cp $CP jline.ConsoleRunner clojure.lang.Repl
        else
            scriptname=$1
            $JAVA -server -cp $CP clojure.main $scriptname -- $*
        fi
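
    One distinction the timings above blur, pointed out because it changes the conclusion: time clj hellofun.clj measures JVM startup plus Clojure's own bootstrap, then the program. Timing the call from inside an already-running process separates the two. A sketch:

        (defn hello-world [username]
          (println (format "Hello, %s" username)))

        ;; clojure.core/time prints the elapsed milliseconds of just this call;
        ;; on a warm JVM that is milliseconds at most, not 1.4 seconds.
        (time (hello-world "world"))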

    Read the article

  • Actual long double precision does not agree with std::numeric_limits

    - by dmb
    Working on Mac OS X 10.6.2, Intel, with i686-apple-darwin10-g++-4.2.1, and compiling with the -arch x86_64 flag, I just noticed that while...

        std::numeric_limits<long double>::max_exponent10 = 4932

    ...as is expected, when a long double is actually set to a value with an exponent greater than 308, it becomes inf, i.e. in reality it only has 64-bit precision instead of 80-bit. Also, sizeof() shows long doubles to be 16 bytes, which they should be. Finally, using <...> gives the same results as <...>. Does anyone know where the discrepancy might be?

        long double x = 1e308, y = 1e309;
        cout << std::numeric_limits<long double>::max_exponent10 << endl;
        cout << x << '\t' << y << endl;
        cout << sizeof(x) << endl;

    gives

        4932
        1e+308  inf
        16
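
    One thing worth checking that has nothing to do with the library, offered because it exactly matches the symptom: without an L suffix, 1e309 is a double literal, so it overflows to inf while still a double, before it is ever converted to long double.

        #include <iostream>
        int main() {
            long double y1 = 1e309;    // double literal: already inf before the conversion
            long double y2 = 1e309L;   // long double literal: within the extended range
            std::cout << y1 << '\t' << y2 << std::endl;   // expect: inf  1e+309
            return 0;
        }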

    Read the article

  • When sending headers to download a PDF, Safari appends .html

    - by alex
    Here are the request and response headers for http://www.example.com/get/pdf:

        GET /~get/pdf HTTP/1.1
        Host: www.example.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.example.com
        Cookie: etc

        HTTP/1.1 200 OK
        Date: Thu, 29 Apr 2010 02:20:43 GMT
        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
        X-Powered-By: Me
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Pragma: no-cache
        Cache-Control: private
        Content-Disposition: attachment; filename="File #1.pdf"
        Content-Length: 18776
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Basically, the response headers are sent by DOMPDF's stream() method. In Firefox, the file is prompted as File #1.pdf. However, in Safari, the file is saved as File #1.pdf.html. Does anyone know why Safari is appending the html extension to the filename?
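
    The response block above contains the likely culprit: Content-Disposition promises a PDF, but Content-Type still says text/html, and Safari derives the extension from the type. A sketch of the fix on the PHP side (assuming DOMPDF, since stream() is mentioned; $dompdf stands for an already-rendered instance):

        <?php
        $pdf = $dompdf->output();                  // raw PDF bytes from DOMPDF
        header('Content-Type: application/pdf');   // the header Safari actually honours
        header('Content-Disposition: attachment; filename="File #1.pdf"');
        header('Content-Length: ' . strlen($pdf));
        echo $pdf;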

    Read the article

  • GenerateDSYMFile warning: unable to open object file

    - by regulus6633
    The background: I have a project that I last built on 10.5 on a PPC computer using Xcode v3.1. It builds against the 10.4 SDK. I now have a MacBook with 10.6 on it and Xcode v3.2.1, and I installed the 10.4 SDK with Xcode. So now I want to build the project on an Intel chip on 10.6. I first get a build error because the wrong version of gcc is set up, so I change the build settings to use gcc 4.0. The problem: now when I build the project I get the following warning:

        GenerateDSYMFile "build/Release/What's Keeping Me?.app.dSYM" "build/Release/What's Keeping Me?.app/Contents/MacOS/What's Keeping Me?"
            cd "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?"
            /Developer/usr/bin/dsymutil "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?/build/Release/What's Keeping Me?.app/Contents/MacOS/What's Keeping Me?" -o "/Users/hmcshane/Development/ Cocoa Projects/What's Keeping Me?/build/Release/What's Keeping Me?.app.dSYM"
        warning: (i386) /Users/hmcshane/Downloads/Csu-71/crt.dynamic_no_pic.o unable to open object file
        warning: (ppc7400) /Users/hmcshane/Downloads/Csu-71/crt.dynamic_no_pic.o unable to open object file

    Any idea what this is? And why is the path for the problem files rooted in my Downloads folder? The project certainly doesn't reside there.
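
    If the dSYM step is the only thing failing, one workaround (a real Xcode build setting; whether losing the separate dSYM bundle is acceptable depends on how you symbolicate crash logs) is to emit plain DWARF and skip dsymutil entirely:

        // Target > Build Settings, or directly in the .pbxproj:
        DEBUG_INFORMATION_FORMAT = dwarf    // instead of dwarf-with-dsym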

    Read the article

  • Some help required while working on Java and Cygwin together

    - by Hippo
    Hello. I am new to Java and also to Cygwin, and I do not have detailed knowledge of either. In simple steps I will try to explain my problem:

    1) I am working on TinyOS. It is an open-source OS used for wireless sensor networks, and it provides Java libraries to work on communication (PC to sensor).

    2) I am working in a Windows XP environment through Cygwin.

    3) I am developing an application. This application requires a Java interface called "SerialForwarder", which is readily available in the libraries provided. Previously I used to start this interface manually (by entering the command "java net.tinyos.sf.SerialForwarder") and then start my application, which uses this interface. But now I want to make my application independent: the user need not know about these background Cygwin commands.

    4) So in my Java application I used Runtime.getRuntime().exec("java net.tinyos.sf.SerialForwarder").

    5) This is neither giving any error nor starting the interface.

    Am I going the right way? When I am using a runtime execute command, how can I make sure that the command is called through the Cygwin interface? Also, if I want to write a .bat file in which I can give commands to be executed, how can I make sure that those commands go through Cygwin and not through cmd.exe? Please help me.
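
    A more debuggable way to launch the forwarder, sketched below (the class name comes from the question; the classpath is a placeholder). Note that exec() does not involve Cygwin or any shell at all, so the child process sees exactly what is passed here and nothing more:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class SfLauncher {
            public static void main(String[] args) throws Exception {
                // Explicit classpath: a bare "java net.tinyos.sf.SerialForwarder"
                // fails silently if the TinyOS classes are not on the default path.
                ProcessBuilder pb = new ProcessBuilder(
                    "java", "-cp", "C:\\tinyos\\tinyos.jar",   // placeholder path
                    "net.tinyos.sf.SerialForwarder");
                pb.redirectErrorStream(true);                  // merge stderr into stdout
                Process p = pb.start();

                // Drain the child's output; this is where the "silent" error shows up.
                BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }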

    Read the article

  • PyDev and Django: Shell not finding certain modules?

    - by Rosarch
    I am developing a Django project with PyDev in Eclipse. For a while, PyDev's Django shell worked great. Now, it doesn't:

        >>> import sys; print('%s %s' % (sys.executable or sys.platform, sys.version))
        C:\Python26\python.exe 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)]
        >>>
        >>> from django.core import management;import mysite.settings as settings;management.setup_environ(settings)
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
        ImportError: No module named mysite.settings
        >>>

    The dev server runs just fine. What could I be doing wrong? The models module is also conspicuously absent:

        >>> import mysite.myapp.models
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
        ImportError: No module named mysite.myapp.models

    On the normal command line, outside of PyDev, the shell works fine. Why could this be happening?
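
    Since the same interpreter works outside PyDev, the first thing worth checking in the broken shell is what it actually has on sys.path (the project path below is a placeholder): if the directory containing mysite/ is missing, both failures follow immediately.

        >>> import sys
        >>> [p for p in sys.path if 'mysite' in p.lower()]   # expect the project root here
        []
        >>> sys.path.append(r'C:\workspace\myproject')       # placeholder; a temporary test only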

    Read the article

  • The 10.6.3 OS X update broke simulated key-presses for Nestopia

    - by Lou Z.
    The iPhone app that I released is a wireless game controller: it translates touches on the device into key-presses on the networked Mac. This allows playing emulator (e.g. Nestopia) games using the iPhone as a controller. Of course, the day that I released it coincided with an OS X update. After installing this update, the simulated key-presses no longer work in Nestopia! The crazier thing is, when I go to 'File > Open' within Nestopia, I can cycle through the file list by hitting the up-arrow on my iPhone controller; i.e. the simulated key-presses work in menu items, but not in the game itself. The code that I use to simulate keys is below. Given the list of changes here, can anyone identify which change would cause this problem? Thanks!!

        #define UP false
        #define DOWN true

        - (void)sendKey:(CGKeyCode)keycode andKeyDirection:(BOOL)keydirection {
            CGEventRef eventRef = CGEventCreateKeyboardEvent(NULL, keycode, keydirection);
            CGEventPost(kCGSessionEventTap, eventRef);
            CFRelease(eventRef);
        }
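
    One variant worth trying (both APIs are real; whether this restores the pre-10.6.3 behaviour is an assumption): give the event an explicit source instead of NULL, and post it at the HID tap, which sits lower in the event pipeline than the session tap:

        CGEventSourceRef src = CGEventSourceCreate(kCGEventSourceStateHIDSystemState);
        CGEventRef eventRef = CGEventCreateKeyboardEvent(src, keycode, keydirection);
        CGEventPost(kCGHIDEventTap, eventRef);   // HID tap instead of session tap
        CFRelease(eventRef);
        CFRelease(src);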

    Read the article

  • How to know if a graphics card provides hardware rendering for WPF

    - by happyclicker
    I have to run a WPF app in an environment where all the machines are the same Dell PCs with an Intel GMA 3000 graphics chip (onboard, Q963/Q965). The app renders only with software rendering (the RenderCapability.Tier property says the rendering tier is 0, and I also see this with Perforator). On all of these machines DirectX 9c is installed, and DxDiag states on many, but not all, of them that Direct3D and DirectDraw acceleration is activated. I also checked the registry to see whether the setup of these machines disabled WPF hardware rendering, but that's also not the case. On one machine I also updated the video driver and DirectX, with no success. I found a lot of resources saying that DirectX must be installed and active so that WPF does not use its own software renderer but uses DirectX hardware rendering. But on the above machines DX9c is installed and there is no hardware rendering. Could it be that WPF uses DX graphics cards but talks to the card directly rather than through DX? How can I find out whether a specific graphics chip has to support hardware rendering for WPF or not? The statement that the graphics card must support DX9c seems not to be the only condition. The second question is: if WPF renders through DX, is this done through Direct3D or DirectDraw? Is there any good documentation on this topic?
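
    For completeness, the tier check itself, including the documented shift (the tier value lives in the high word), plus the one fact that answers the second question: WPF renders through Direct3D, not DirectDraw.

        using System;
        using System.Windows.Media;

        class TierCheck
        {
            static void Main()
            {
                int tier = RenderCapability.Tier >> 16;
                // 0 = no hardware acceleration (software rasterizer),
                // 1 = partial hardware acceleration,
                // 2 = full hardware acceleration.
                Console.WriteLine("WPF render tier: {0}", tier);
            }
        }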

    Read the article

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc.). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion?

      - Transport layer: compromised via wireless packet sniffing, malicious wiretapping, etc.
      - Transport devices: risks include ISPs and Internet backbone operators sniffing data.
      - End-user device: vulnerable to spyware, key loggers, shoulder surfing, and so forth.
      - Remote server: many uncontrollable vulnerabilities, including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more.

    My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) as low to non-existent. But what security does the server I'm connected to have? Has it been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

    Read the article

  • Unable to acquire image through ImageIO.read(url) because of connection timed out

    - by Jake Frederix
    The following code always fails:

        URL url = new URL("http://userserve-ak.last.fm/serve/126/8636005.jpg");
        Image img = ImageIO.read(url);
        System.out.println(img);

    I've manually checked the URL: it is valid and contains a valid JPG image. The problem I get is:

        Exception in thread "main" javax.imageio.IIOException: Can't get input stream from URL!
            at javax.imageio.ImageIO.read(ImageIO.java:1385)
            at maestro.Main2.main(Main2.java:25)
        Caused by: java.net.ConnectException: Connection timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:310)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:176)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:163)
            at java.net.Socket.connect(Socket.java:546)
            at java.net.Socket.connect(Socket.java:495)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:174)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:409)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:530)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:240)
            at sun.net.www.http.HttpClient.New(HttpClient.java:321)
            at sun.net.www.http.HttpClient.New(HttpClient.java:338)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:814)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:755)
            at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:680)
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1005)
            at java.net.URL.openStream(URL.java:1029)
            at javax.imageio.ImageIO.read(ImageIO.java:1383)
            ... 1 more
        Java Result: 1

    What does this mean? The funny thing is, if I switch my Internet connection to the neighbour's wireless, it suddenly works.
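
    Since the trace is a plain TCP connect timeout, one practical variant (all standard java.net/javax.imageio APIs; the timeout values are arbitrary) is to open the connection yourself so it fails fast and lets you tweak request headers. Some hosts drop clients based on the User-Agent, which would also fit the "works on the neighbour's network" observation only coincidentally, so treat that header as an experiment:

        import java.awt.image.BufferedImage;
        import java.net.URL;
        import java.net.URLConnection;
        import javax.imageio.ImageIO;

        public class FetchImage {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://userserve-ak.last.fm/serve/126/8636005.jpg");
                URLConnection conn = url.openConnection();
                conn.setConnectTimeout(5000);   // fail in 5 s instead of hanging for minutes
                conn.setReadTimeout(5000);
                conn.setRequestProperty("User-Agent", "Mozilla/5.0");   // assumption: UA filtering
                BufferedImage img = ImageIO.read(conn.getInputStream());
                System.out.println(img);
            }
        }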

    Read the article

  • DDK/WDM development problem: driver won't load on x64 Windows platform

    - by user295975
    Hi there! I am a beginner in the DDK/WDM driver development field. I have a task that involves porting a virtual device driver from x86 to x64 (Intel). I got the source code, modified it a bit, and compiled it successfully with the DDK (build environments). But when I tried to load it on an x64 Windows 7 machine, it didn't want to load. Then I tried some simple examples of device drivers from http://www.codeproject.com/KB/system/driverdev.aspx and from other links, but with the same problem. I heard on a forum that some of the libraries you link against are not compatible with the new machines, and it was suggested to link against similar replacement libraries... but that still didn't work. When I build, I use the "-cefw" command-line parameters, as suggested. I do not have an .inf file associated with the driver; I'm copying it into system32\drivers and using WinObj to see whether it is loaded into memory after the next restart. I also tried this program ( http://www.codeproject.com/KB/system/tdriver.aspx ) to load the driver into memory, but that still didn't work for me. Please help me... I'm stuck on this and my deadline has already passed. I feel like I'm going nuts trying to discover what I am doing wrong.
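
    One cause the question never mentions, and the most common one for exactly this symptom on x64 (offered as a likely suspect, not a diagnosis): 64-bit Windows Vista/7 refuses to load unsigned kernel-mode drivers, and it does so silently from the driver's point of view. For development, enable test signing and sign with a test certificate (WDK tools; the certificate and file names below are placeholders):

        rem Allow test-signed drivers (run as administrator, then reboot):
        bcdedit /set testsigning on

        rem Create a test certificate and sign the driver:
        makecert -r -pe -ss PrivateCertStore -n CN=MyTestCert mytestcert.cer
        signtool sign /s PrivateCertStore /n MyTestCert mydriver.sys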

    Read the article
