Search Results

Search found 2072 results on 83 pages for 'nvidia ion'.

  • Why do GPUs overheat?

    - by JAD
    About a year ago, I added a 9800GT (1 GB version) and a Corsair CX500 PSU to an HP M8000N computer. A few weeks ago, the HDD overheated and I decided to transfer the GPU & PSU to a new build, which consists of: i3 @ 3.3GHz, Gigabyte H61 Micro ATX mobo, 4GB RAM, 500GB WD HDD, DVD RW drive, Cooler Master Elite 430 tower.

    Once I had Win7 up and running, I installed all the essential drivers that came with the Gigabyte mobo CD. However, whenever I tried installing the Graphics Media Accelerator driver, the computer would crash and enter an endless boot sequence on the next startup. I skipped installing this driver and installed the CD driver for the 9800GT, which by now is a year old. Everything was working fine; WEI rated my GPU at 6.6 for graphics & Aero performance. However, after updating my Nvidia drivers to the latest, the WEI dropped my rating to 3.3 for Aero and 4.7 for graphics performance.

    Just to make sure that everything was OK, I ran Bad Company 2 on medium settings. The first few minutes ran just fine at a smooth framerate, so I dismissed this as Windows being Windows. About 6 hours later, I ran BC2 again. This time I averaged anywhere from 2-5 FPS. I checked the GPU temperature through GPU-Z, and it came back as 120°C. The problem with this is that the computer was on for six hours up to that point. Wouldn't the card have experienced a reactor core meltdown a lot sooner than that? Granted, the computer was "sleeping" some of the time, but still...

    The next day I took out a temperature gun and ran some tests. I would point the laser at a very specific area on the reverse side of the card (not the fan or "front") and compare the temp reading with GPU-Z. After leaving the system idle for a few minutes, I ran BC2 twice. Here are the results (GPU-Z reading / temp gun reading / time):

    Null / 22.3°C / Comp is off
    53°C / 33.5°C / 1:49
    78°C / 46°C / 1:53 - (First BC2 run; good framerate)
    102°C / 64.6°C / 2:01 - (System is again on idle)
    113°C / 64.8°C / 2:10
    119°C / 71.8°C / 2:17 - (Second BC2 run; poor framerate)

    I should also mention that I took a temp recording of another part of the GPU from 2:01-2:17. The temp in this area jumped from 75°C to 82.9°C in that time frame. This pretty much confirms that GPU-Z is reporting the temperature accurately, and the card is overheating. But I'd like to know why; the card is doing nothing and still the temperature climbs at a steady rate. I thoroughly cleaned the GPU and PSU with a can of compressed air when I salvaged them from the old HP M8000N computer, so dust can't be the issue. Similarly, the rest of the computer is brand new. I installed various Nvidia drivers, but no luck.

    It seems strange to me that a year-old card is suddenly failing on me; aren't they supposed to last at least two years? Could this be a driver issue? Is the motherboard faulty? Could the PSU be overfeeding the card on voltage? None of these seems likely, as the CPU, RAM and the rest of the comp have worked flawlessly and have stayed well within respectable temp ranges (the i3 lingers around 50°C, the HDD stays at 30°C, and so does the PSU). How can I pinpoint the issue?

    Read the article

  • Vista 64-bit, DISK BOOT FAILURE

    - by weka
    So I have this Acer Aspire AX3200-U3600A with Windows Vista (64-bit). Every night I turn it off and turn it back on in the morning. Around three weeks ago, I did a fresh factory reimage. Good as new. Then around two days ago, when I turned it on, I noticed it was running extremely slow. It would often freeze up while I had multiple applications open, when it usually never froze up. So I decided to restart my computer. Big mistake. My computer froze right after I clicked shut down. I waited a while. Nothing. Waited some minutes. Nope. I decided to shut it down by pressing the power button.

    Here is where the problems begin. When I turned it back on, I saw the Windows logo and loading bar and then it loaded to black. I turned it off again forcefully by the power button and then once more... then I got:

    AMD Data Change... Update New Data to DMI!

    then later the screen clears and I get:

    AHCI Option ROM BIOS Revision: 01.05.92 Date: 02-19-2008 Copyright (c) 2006-2008 Phoenix Technologies, LTD
    Port 01: Reset Port Error!!
    Port 02:

    then the screen clears again, but this time this loads from the bottom:

    Nvidia Boot Agent 249.0542 (copyright stuff... blah blah)
    PXE-E61: Media test failure, check cable.
    PXE-M0F: Exiting Nvidia Boot Agent
    DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER.

    So I try to go into Safe Mode. Well, first of all it doesn't load as fast. After it loads disk.sys from windows/drivers, it will wait a while (2-3 mins) THEN load. However, it loads the Acer eRecovery Management tool. I have three options: Reset computer to factory default, Restore computer from user's backup, or Exit. However, the top two options are gray and disabled, whereas Exit is in blue and definitely clickable. So obviously Safe Mode is not there...

    A strong thing to note: in the beginning, when all of this started, I did a Boot Windows Normal by pressing F8 and I got to my desktop! It logged me in. I could see the icons of my files. However, my desktop was extremely slow, as in when I clicked on the Start menu, it would wait a while, then load up the menu with JUST the gradient, no text or icons... so as you can see... it saw my HDD?

    Also, before anyone asks, I have NO USB plugged in. My mouse and keyboard are not USB inputs, I assure you. And this came without a recovery CD, AND when I went into the BIOS to change the BOOT ORDER, I did NOT see a CD-ROM option. And when I tried pressing ALT+F10 to get into Acer eRecovery Management, the top two options were disabled as well.

    But sometimes on start-up, I get:

    Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer.
    Status: 0xc00000e9
    Info: An unexpected I/O error has occurred.

    Then I tried Last Known Good Configuration Settings, and that gives me a BSOD. What should I do?

    Read the article

  • Gaming blew a fuse and causes a funny smell: how do I overcome this?

    - by George Tomlinson
    I've been gaming for a while now. When playing certain games, this PC goes into overdrive: the fan (or fans) gets so busy it starts to sound like a jet engine. I have also smelt burning when this has happened. The fuse blew recently on the 4-socket adapter I was using. On the following thread, someone with what seems like a related issue was told this could be due to the PSU not being strong enough to handle the load, although the person who posted that question did say that blowing a fan on their PC stopped it crashing in that case: http://www.tomshardware.co.uk/answers/id-2047543/gtx-650-overheating-issue.html. This is exactly what they said:

    Your GPU isn't overheating. 70+ before it would shutdown and cause a restart. Make sure your PSU is strong enough to handle your new system at load and possibly run Memtest to check your RAM (although not BSOD'ing and just shutting down points to the PSU).

    This (the PSU part) makes more sense to me than it being due to dust etc., since it seems a more plausible explanation of why the fuse blew. The PC has no problems except when playing certain games, i.e. TERA Rising and WoW with add-ons (I think WoW is OK as long as I don't have more than 1 add-on (Healers Have To Die)). I'm just wondering if anyone knows or can suggest what I might be able to do to play these games without this problem occurring. The PC's spec is this:

    Display: NVIDIA GeForce GTX 650
    8GB RAM (6 available)
    Processor: AMD FX (tm) - 8120 Eight-Core Processor - 3.1 GHz, 4 Cores, 8 Logical Processors

    I have read on another post that forcing vsync in the Nvidia Control Panel helped with what seems like a similar problem, so I plan to see if that solves it, God permitting.

    EDIT: I tried the vsync thing, and it seems the situation may have improved, although this may be due to something else: i.e. maybe the PC was working harder yesterday, due to just having downloaded a few things or having lots of things running. I'm still noticing the funny smell when playing TERA. It's not so much burning: it's more like glue. The smell might have had a burning element to it in the past, but I think it's always had a glue element.

    EDIT 2: the PSU is an 'ATX Switching Power Supply', Model E-500ATX. Other info it gives on the PSU is 230V, Current 10A and Frequency 50-60Hz. It also has some other info which I can supply if necessary. Putting the PC plug in the wall socket instead of the power strip seems like it might have reduced the load on the PC quite a bit: I think it sounds less stressed. It has been off for a while whilst I took the side panel off, though, so I'll wait to see what happens before getting too excited.

    EDIT 3: Hmm. So here's the latest: just playing TERA. The fan's running quite fast again. Hard to tell whether switching to the wall socket has made a difference in terms of strain on the PC: I don't know if one would expect it to. It still seems like it might have helped, though. Oh, and there didn't seem to be much dust in the PC, although I didn't disconnect any components. I'm still getting the glue-type smell. ASIDE: this reminds me of someone on a PC near me at the library once who was actually sniffing glue right there in front of everyone while on the PC, and he started talking about how he was sniffing glue. lol. That's no joke.

    EDIT 4: So the questions now are:
    Question 1: Is the smell something I should sort out? (If so, how might I do this?)
    Question 2: Is it necessary to take any steps to prevent blowing another fuse (and if so, which step/s)?

    Read the article

  • Per-pixel displacement mapping GLSL

    - by Chris
    I'm trying to implement a per-pixel displacement shader in GLSL. I read through several papers and "tutorials" I found and ended up trying to implement the approach NVIDIA used in their Cascades demo (http://www.slideshare.net/icastano/cascades-demo-secrets), starting at slide 82. At the moment I am completely stuck on the following problem: when I am far away, the displacement seems to work, but the closer I move to the surface, the more the texture gets bent along the x-axis, and in general it looks like there is a slight bend in one direction. EDIT: I added a video: click. I also added some screenshots to illustrate the problem. Well, I have tried lots of things already and I am starting to get a bit frustrated as my ideas run out. I added my full VS and FS code:

    VS:

    #version 400

    layout(location = 0) in vec3 IN_VS_Position;
    layout(location = 1) in vec3 IN_VS_Normal;
    layout(location = 2) in vec2 IN_VS_Texcoord;
    layout(location = 3) in vec3 IN_VS_Tangent;
    layout(location = 4) in vec3 IN_VS_BiTangent;

    uniform vec3 uLightPos;
    uniform vec3 uCameraDirection;
    uniform mat4 uViewProjection;
    uniform mat4 uModel;
    uniform mat4 uView;
    uniform mat3 uNormalMatrix;

    out vec2 IN_FS_Texcoord;
    out vec3 IN_FS_CameraDir_Tangent;
    out vec3 IN_FS_LightDir_Tangent;

    void main( void )
    {
        IN_FS_Texcoord = IN_VS_Texcoord;

        vec4 posObject = uModel * vec4(IN_VS_Position, 1.0);
        vec3 normalObject = (uModel * vec4(IN_VS_Normal, 0.0)).xyz;
        vec3 tangentObject = (uModel * vec4(IN_VS_Tangent, 0.0)).xyz;
        //vec3 binormalObject = (uModel * vec4(IN_VS_BiTangent, 0.0)).xyz;
        vec3 binormalObject = normalize(cross(tangentObject, normalObject));

        // uCameraDirection is the camera position, just badly named
        vec3 fvViewDirection = normalize( uCameraDirection - posObject.xyz);
        vec3 fvLightDirection = normalize( uLightPos.xyz - posObject.xyz );

        IN_FS_CameraDir_Tangent.x = dot( tangentObject, fvViewDirection );
        IN_FS_CameraDir_Tangent.y = dot( binormalObject, fvViewDirection );
        IN_FS_CameraDir_Tangent.z = dot( normalObject, fvViewDirection );

        IN_FS_LightDir_Tangent.x = dot( tangentObject, fvLightDirection );
        IN_FS_LightDir_Tangent.y = dot( binormalObject, fvLightDirection );
        IN_FS_LightDir_Tangent.z = dot( normalObject, fvLightDirection );

        gl_Position = (uViewProjection*uModel) * vec4(IN_VS_Position, 1.0);
    }

    The VS just builds the TBN matrix from the incoming normal, tangent and binormal in world space, calculates the light and eye direction in world space, and finally transforms the light and eye direction into tangent space.
    FS:

    #version 400

    // uniforms
    uniform Light {
        vec4 fvDiffuse;
        vec4 fvAmbient;
        vec4 fvSpecular;
    };

    uniform Material {
        vec4 diffuse;
        vec4 ambient;
        vec4 specular;
        vec4 emissive;
        float fSpecularPower;
        float shininessStrength;
    };

    uniform sampler2D colorSampler;
    uniform sampler2D normalMapSampler;
    uniform sampler2D heightMapSampler;

    in vec2 IN_FS_Texcoord;
    in vec3 IN_FS_CameraDir_Tangent;
    in vec3 IN_FS_LightDir_Tangent;

    out vec4 color;

    vec2 TraceRay(in float height, in vec2 coords, in vec3 dir, in float mipmap){
        vec2 NewCoords = coords;
        vec2 dUV = - dir.xy * height * 0.08;
        float SearchHeight = 1.0;
        float prev_hits = 0.0;
        float hit_h = 0.0;

        for(int i=0;i<10;i++){
            SearchHeight -= 0.1;
            NewCoords += dUV;
            float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r;
            float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0);
            hit_h += first_hit * SearchHeight;
            prev_hits += first_hit;
        }

        NewCoords = coords + dUV * (1.0-hit_h) * 10.0f - dUV;
        vec2 Temp = NewCoords;
        SearchHeight = hit_h+0.1;
        float Start = SearchHeight;
        dUV *= 0.2;
        prev_hits = 0.0;
        hit_h = 0.0;

        for(int i=0;i<5;i++){
            SearchHeight -= 0.02;
            NewCoords += dUV;
            float CurrentHeight = textureLod(heightMapSampler,NewCoords.xy, mipmap).r;
            float first_hit = clamp((CurrentHeight - SearchHeight - prev_hits) * 499999.0,0.0,1.0);
            hit_h += first_hit * SearchHeight;
            prev_hits += first_hit;
        }

        NewCoords = Temp + dUV * (Start - hit_h) * 50.0f;
        return NewCoords;
    }

    void main( void )
    {
        vec3 fvLightDirection = normalize( IN_FS_LightDir_Tangent );
        vec3 fvViewDirection = normalize( IN_FS_CameraDir_Tangent );

        float mipmap = 0;
        vec2 NewCoord = TraceRay(0.1,IN_FS_Texcoord,fvViewDirection,mipmap);

        //vec2 ddx = dFdx(NewCoord);
        //vec2 ddy = dFdy(NewCoord);
        vec3 BumpMapNormal = textureLod(normalMapSampler, NewCoord.xy, mipmap).xyz;
        BumpMapNormal = normalize(2.0 * BumpMapNormal - vec3(1.0, 1.0, 1.0));

        vec3 fvNormal = BumpMapNormal;
        float fNDotL = dot( fvNormal, fvLightDirection );
        vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection );
        float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) );
        vec4 fvBaseColor = textureLod( colorSampler, NewCoord.xy,mipmap);

        vec4 fvTotalAmbient = fvAmbient * fvBaseColor;
        vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor;
        vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) );

        color = ( fvTotalAmbient + (fvTotalDiffuse + fvTotalSpecular) );
    }

    The FS implements the displacement technique in the TraceRay method, always using mipmap level 0. Most of the code is from the NVIDIA sample and another paper I found on the web, so I guess there cannot be much wrong in here. At the end it uses the modified UV coords to get the displaced normal from the normal map and the color from the color map. I'm looking forward to some ideas. Thanks in advance!

    Edit: Here is the code loading the heightmap:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mImageData);
    glGenerateMipmap(GL_TEXTURE_2D);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    Maybe something is wrong in here?
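    One hedged observation about the heightmap setup above, not necessarily the cause of the bending: GL_TEXTURE_MAG_FILTER only accepts GL_NEAREST or GL_LINEAR, so the first commented-out line would be rejected with GL_INVALID_ENUM if re-enabled as written. A minimal sketch of explicit sampler state, assuming a 2D heightmap with mipmaps generated as above and repeat wrapping desired:

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // mipmap filtering belongs on MIN_FILTER
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);               // MAG_FILTER only accepts GL_NEAREST or GL_LINEAR
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    With the lines left commented out, the texture's default state applies instead (GL_NEAREST_MIPMAP_LINEAR minification, GL_LINEAR magnification, GL_REPEAT wrapping).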

    Read the article

  • GLSL compiler messages from different vendors [on hold]

    - by revers
    I'm writing a GLSL shader editor and I want to parse GLSL compiler messages to make hyperlinks to invalid lines in the shader code. I know that these messages are vendor-specific, but currently I have access only to AMD's video cards. I want to handle at least NVidia's and Intel's hardware, apart from AMD's. If you have a video card from a vendor other than AMD, could you please give me the output of the following C++ program:

    #include <GL/glew.h>
    #include <GL/freeglut.h>
    #include <iostream>

    using namespace std;

    #define STRINGIFY(X) #X

    static const char* fs = STRINGIFY(
        out vec4 out_Color;
        mat4 m;
        void main() {
            vec3 v3 = vec3(1.0);
            vec2 v2 = v3;
            out_Color = vec4(5.0 * v2.x, 1.0);
            vec3 k = 3.0;
            float = 5;
        }
    );

    static const char* vs = STRINGIFY(
        in vec3 in_Position;
        void main() {
            vec3 v(5);
            gl_Position = vec4(in_Position, 1.0);
        }
    );

    void printShaderInfoLog(GLint shader) {
        int infoLogLen = 0;
        int charsWritten = 0;
        GLchar *infoLog;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLen);
        if (infoLogLen > 0) {
            infoLog = new GLchar[infoLogLen];
            glGetShaderInfoLog(shader, infoLogLen, &charsWritten, infoLog);
            cout << "Log:\n" << infoLog << endl;
            delete [] infoLog;
        }
    }

    void printProgramInfoLog(GLint program) {
        int infoLogLen = 0;
        int charsWritten = 0;
        GLchar *infoLog;
        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLogLen);
        if (infoLogLen > 0) {
            infoLog = new GLchar[infoLogLen];
            glGetProgramInfoLog(program, infoLogLen, &charsWritten, infoLog);
            cout << "Program log:\n" << infoLog << endl;
            delete [] infoLog;
        }
    }

    void initShaders() {
        GLuint v = glCreateShader(GL_VERTEX_SHADER);
        GLuint f = glCreateShader(GL_FRAGMENT_SHADER);
        GLint vlen = strlen(vs);
        GLint flen = strlen(fs);
        glShaderSource(v, 1, &vs, &vlen);
        glShaderSource(f, 1, &fs, &flen);
        GLint compiled;
        glCompileShader(v);
        bool succ = true;
        glGetShaderiv(v, GL_COMPILE_STATUS, &compiled);
        if (!compiled) {
            cout << "Vertex shader not compiled." << endl;
            succ = false;
        }
        printShaderInfoLog(v);
        glCompileShader(f);
        glGetShaderiv(f, GL_COMPILE_STATUS, &compiled);
        if (!compiled) {
            cout << "Fragment shader not compiled." << endl;
            succ = false;
        }
        printShaderInfoLog(f);
        GLuint p = glCreateProgram();
        glAttachShader(p, v);
        glAttachShader(p, f);
        glLinkProgram(p);
        glUseProgram(p);
        printProgramInfoLog(p);
        if (!succ) {
            exit(-1);
        }
        delete [] vs;
        delete [] fs;
    }

    int main(int argc, char* argv[]) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
        glutInitWindowSize(600, 600);
        glutCreateWindow("Triangle Test");
        glewInit();
        GLenum err = glewInit();
        if (GLEW_OK != err) {
            cout << "glewInit failed, aborting." << endl;
            exit(1);
        }
        cout << "Using GLEW " << glewGetString(GLEW_VERSION) << endl;
        const GLubyte* renderer = glGetString(GL_RENDERER);
        const GLubyte* vendor = glGetString(GL_VENDOR);
        const GLubyte* version = glGetString(GL_VERSION);
        const GLubyte* glslVersion = glGetString(GL_SHADING_LANGUAGE_VERSION);
        GLint major, minor;
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        cout << "GL Vendor : " << vendor << endl;
        cout << "GL Renderer : " << renderer << endl;
        cout << "GL Version : " << version << endl;
        cout << "GL Version : " << major << "." << minor << endl;
        cout << "GLSL Version : " << glslVersion << endl;
        initShaders();
        return 0;
    }

    On my video card it gives:

    Status: Using GLEW 1.7.0
    GL Vendor : ATI Technologies Inc.
    GL Renderer : ATI Radeon HD 4250
    GL Version : 3.3.11631 Compatibility Profile Context
    GL Version : 3.3
    GLSL Version : 3.30
    Vertex shader not compiled.
    Log: Vertex shader failed to compile with the following errors:
    ERROR: 0:1: error(#132) Syntax error: '5' parse error
    ERROR: error(#273) 1 compilation errors. No code generated

    Fragment shader not compiled.
    Log: Fragment shader failed to compile with the following errors:
    WARNING: 0:1: warning(#402) Implicit truncation of vector from size 3 to size 2.
    ERROR: 0:1: error(#174) Not enough data provided for construction constructor
    WARNING: 0:1: warning(#402) Implicit truncation of vector from size 1 to size 3.
    ERROR: 0:1: error(#132) Syntax error: '=' parse error
    ERROR: error(#273) 2 compilation errors. No code generated

    Program log:
    Vertex and Fragment shader(s) were not successfully compiled before glLinkProgram() was called. Link failed.

    Or, if you like, you could give me other compiler messages than the ones I proposed. To summarize, the question is: what are the GLSL compiler message formats (INFOs, WARNINGs, ERRORs) for different vendors? Please give me examples or a pattern explanation.

    EDIT: OK, it seems that this question is too broad, so in short: how do NVidia's and Intel's GLSL compilers present ERROR and WARNING messages? AMD/ATI uses patterns like this:

    ERROR: <position>:<line_number>: <message>
    WARNING: <position>:<line_number>: <message>

    (examples are above).
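    As an aside on the parsing side of such an editor (not an answer to the vendor-format question): a minimal C++ sketch that matches only the AMD/ATI pattern quoted above; the function name and signature are made up for illustration, and NVidia/Intel would need their own patterns once example logs are collected:

    #include <regex>
    #include <string>

    // Parses one AMD/ATI-style log line such as
    //   "ERROR: 0:1: error(#132) Syntax error: '5' parse error"
    // into (severity, position, line number, message). Summary lines like
    //   "ERROR: error(#273) 1 compilation errors. No code generated"
    // do not match and return false.
    bool parseAmdLogLine(const std::string& line, std::string& severity,
                         int& position, int& lineNumber, std::string& message) {
        static const std::regex re(R"((ERROR|WARNING):\s*(\d+):(\d+):\s*(.*))");
        std::smatch m;
        if (!std::regex_match(line, m, re))
            return false;
        severity   = m[1];
        position   = std::stoi(m[2]);
        lineNumber = std::stoi(m[3]);
        message    = m[4];
        return true;
    }

    The extracted line number is what the editor would turn into a hyperlink; a second pattern per vendor could be added alongside this one.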

    Read the article

  • NAudio demos not working anymore

    - by Kurru
    I just tried to run the NAudio demos and I'm getting a weird error:

    System.BadImageFormatException: Could not load file or assembly 'NAudio, Version=1.3.8.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format.
    File name: 'NAudio, Version=1.3.8.0, Culture=neutral, PublicKeyToken=null'
    at NAudioWpfDemo.AudioGraph..ctor()
    at NAudioWpfDemo.ControlPanelViewModel..ctor(IWaveFormRenderer waveFormRenderer, SpectrumAnalyser analyzer) in C:\Users\Admin\Downloads\NAudio-1.3\NAudio-1-3\Source Code\NAudioWpfDemo\ControlPanelViewModel.cs:line 23
    at NAudioWpfDemo.MainWindow..ctor() in C:\Users\Admin\Downloads\NAudio-1.3\NAudio-1-3\Source Code\NAudioWpfDemo\MainWindow.xaml.cs:line 15

    WRN: Assembly binding logging is turned OFF.
    To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
    Note: There is some performance penalty associated with assembly bind failure logging.
    To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

    Since the last time I used the NAudio demos I have changed from 32-bit Windows XP to 64-bit Windows 7. Would this cause this issue? It's very annoying, as I was about to try my hand at audio in C# again.

    Read the article

  • hibernate3-maven-plugin dependencies for newer version of hibernate

    - by Samuel
    I would like to use hibernate-3.5-1.Final along with this plugin; what should my dependencies be here? It seems to be picking up an older set of jars and failing right now.

    <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>hibernate3-maven-plugin</artifactId>
        <version>2.2</version>
    </plugin>

    EDIT1:

    [INFO] class org.hibernate.cfg.ExtendedMappings has interface org.hibernate.cfg.Mappings as super class
    [INFO] ------------------------------------------------------------------------
    [INFO] Trace
    java.lang.IncompatibleClassChangeError: class org.hibernate.cfg.ExtendedMappings has interface org.hibernate.cfg.Mappings as super class
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:56)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at org.codehaus.classworlds.RealmClassLoader.loadClassDirect(RealmClassLoader.java:195)
        at org.codehaus.classworlds.DefaultClassRealm.loadClass(DefaultClassRealm.java:255)
        at org.codehaus.classworlds.RealmClassLoader.loadClass(RealmClassLoader.java:214)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
        at org.hibernate.cfg.AnnotationConfiguration.createExtendedMappings(AnnotationConfiguration.java:187)
        at org.hibernate.cfg.AnnotationConfiguration.secondPassCompile(AnnotationConfiguration.java:277)
        at org.hibernate.cfg.Configuration.buildMappings(Configuration.java:1206)
        at org.hibernate.ejb.Ejb3Configuration.buildMappings(Ejb3Configuration.java:1226)
        at org.hibernate.ejb.EventListenerConfigurator.configure(EventListenerConfigurator.java:173)
        at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:854)
        at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:191)
        at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:253)

    Read the article

  • Cannot read values from [NSUserDefaults standardUserDefaults] after synchronize

    - by Nonlinearsound
    In the Application Delegate didFinishLaunching method, I am using the following code to build up a new NSDictionary to be used as the new settings bundle for the user:

    NSNumber *testValue = (NSNumber*)[[NSUserDefaults standardUserDefaults] objectForKey:@"settingsversion"];
    if (testValue == nil) {
        NSNumber *numNewDB = [NSNumber numberWithBool:NO];
        NSNumber *numFirstUse = [NSNumber numberWithBool:YES];
        NSDate *dateLastStatic = [NSDate date];
        NSDate *dateLastMobile = [NSDate date];
        NSNumber *numSettingsversion = [NSNumber numberWithFloat:1.0];

        NSDictionary *appDefaults = [NSDictionary dictionaryWithObjectsAndKeys:
            numNewDB, @"newdb",
            numFirstUse, @"firstuse",
            numSettingsversion, @"settingsversion",
            dateLastStatic, @"laststaticupdate",
            dateLastMobile, @"lastmobileupdate",
            nil];

        [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];
        [[NSUserDefaults standardUserDefaults] synchronize];
    }

    Later, in another ViewController, I am trying to read back a value from that same dictionary, saved as the NSUserDefaults - well, at least I thought it would be, but I don't get any valid object pointer for the desired member lastUpdate there:

    In the .h file:

    NSDate *lastUpdate;

    In the .m file, in a member function:

    lastUpdate = (NSDate *)[[NSUserDefaults standardUserDefaults] objectForKey:@"laststaticupdate"];

    Even if I print out the content of [NSUserDefaults standardUserDefaults], I only get this:

    2010-04-29 15:13:22.322 myApp[4136:207] Content of UserDefaults: <NSUserDefaults: 0x11d340>

    This leads me to the conclusion that there is no standardUserDefaults dictionary somewhere in memory, or that it cannot be determined as such a structure.

    Edit: Every time I restart the app on the device, the check for testValue is nil and I am building up the dictionary again, but after one run it should be persistent on the phone, right? Am I doing something wrong somewhere in between? I have the feeling that I didn't really understand yet how to load and save settings persistently for a certain application on the iPhone. Is there anything I have to do in addition to this? Integrating a settings.bundle in Xcode or saving the dictionary manually to the Documents folder? Can someone help me out here? Thanks a lot!

    Read the article

  • Call to undefined function 'Encrypt' - Attempting to Link OMF Lib

    - by Changeling
    I created a DLL using Visual Studio 2005 VC++ and marked a function for export (for testing). I then took the .LIB file created and ran it through the COFF2OMF converter program bundled with Borland C++ Builder 5, and it returns the following:

    C:\>coff2omf -v -lib:ms MACEncryption.lib MACEncryption2.lib
    COFF to OMF Converter Version 1.0.0.74 Copyright (c) 1999, 2000 Inprise Corporation
    Internal name                        Imported name
    -------------                        -------------
    ??0CMACEncryptionApp@@QAE@XZ
    ?Decrypt@CMACEncryptionApp@@QAEXXZ
    Encrypt                              Encrypt@0

    I added the MACEncryption2.lib file to my C++ Builder 5 project by going to Project > Add to Project... and selecting the library. The application links, but it cannot find the Encrypt function that I am declaring for export as follows in the VC++ code:

    extern "C" __declspec(dllexport) BSTR* __stdcall Encrypt()
    {
        CoInitialize(NULL);
        EncryptionManager::_EncryptionManagerPtr pDotNetCOMPtr;
        HRESULT hRes = pDotNetCOMPtr.CreateInstance(EncryptionManager::CLSID_EncryptionManager);
        if (hRes == S_OK)
        {
            BSTR* str = new BSTR;
            BSTR filePath = (BSTR)"C:\\ICVER001.REQ";
            BSTR encrypt = (BSTR)"\"test";
            pDotNetCOMPtr->EncryptThirdPartyMessage(filePath, encrypt, str);
            return str;
        }
        return NULL;
        CoUninitialize();
    }

    C++ Builder code:

    __fastcall TForm1::TForm1(TComponent* Owner)
        : TForm(Owner)
    {
        Encrypt();
    }

    (Yes, I know I am encapsulating another DLL... I am doing this for a reason, since Borland can't 'see' the .NET DLL definitions.) Can anyone tell me what I am doing wrong, so I can figure out why Builder cannot find the function Encrypt()?
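    As an aside, a minimal sketch of how the import is often declared on the calling (C++ Builder) side; the placement is hypothetical and it assumes the converted MACEncryption2.lib is linked into the project. "Call to undefined function" is a compile-time error, so the point of the sketch is that a prototype with the same linkage and calling convention as the export has to be visible wherever Encrypt() is called:

    #include <windows.h>   // BSTR comes from the Windows/OLE headers

    // Prototype matching the DLL export (extern "C", __stdcall, returns BSTR*):
    extern "C" BSTR* __stdcall Encrypt();

    With that declaration in scope, the call in the TForm1 constructor compiles, and any remaining mismatch (for example Encrypt vs. Encrypt@0 naming) would then surface at link time instead.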

    Read the article

  • PHP: appending URL parameters returns a special character

    - by Alexandre Lavoie
    I'm programming a function to build a URL. Here it is:

    public static function requestContent($p_lParameters)
    {
        $sParameters = "?key=TEST&format=json&jsoncallback=none";

        foreach($p_lParameters as $sParameterName => $sParameterValue)
        {
            $sParameters .= "&$sParameterName=$sParameterValue";
        }

        echo "<span style='font-size: 16px;'>URL : http://api.oodle.com/api/v2/listings" . $sParameters . "</span><br />";

        $aXMLData = file_get_contents("http://api.oodle.com/api/v2/listings" . $sParameters);

        return json_decode($aXMLData,true);
    }

    And I am calling this function with this array list; print_r() result:

    Array ( [region] => canada [category] => housing/sale/home )

    But this is very strange: I get an unexpected character (note the special character none*®*ion):

    http://api.oodle.com/api/v2/listings?key=TEST&format=json&jsoncallback=none®ion=canada&category=housing/sale/home

    For information, I use this header:

    <meta http-equiv="Content-Type" content="text/html;charset=UTF-8" />
    <?php header('Content-Type: text/html;charset=UTF-8'); ?>

    EDIT:

    $sRequest = "http://api.oodle.com/api/v2/listings?key=TEST&format=json&jsoncallback=none&region=canada&category=housing/sale/home";
    echo "<span style='font-size: 16px;'>URL : " . $sRequest . "</span><br />";

    returns the exact URL with the problem:

    http://api.oodle.com/api/v2/listings?key=TEST&format=json&jsoncallback=none®ion=canada&category=housing/sale/home

    Thank you for your help!

    Read the article

  • Trouble parsing self-closing XML tags using SAX parser

    - by sandesh
    Hi, I am having trouble parsing self-closing XML tags using SAX. I am trying to extract the link tag from the Google Base API. I am having reasonable success in parsing regular tags. Here is a snippet of the XML:

    <entry>
    <id>http://www.google.com/base/feeds/snippets/15802191394735287303</id>
    <published>2010-04-05T11:00:00.000Z</published>
    <updated>2010-04-24T19:00:07.000Z</updated>
    <category scheme='http://base.google.com/categories/itemtypes' term='Products'/>
    <title type='text'>En-el1 Li-ion Battery+charger For Nikon Digital Camera</title>
    <link rel='alternate' type='text/html' href='http://rover.ebay.com/rover/1/711-67261-24966-0/2?ipn=psmain&amp;icep_vectorid=263602&amp;kwid=1&amp;mtid=691&amp;crlp=1_263602&amp;icep_item_id=170468125748&amp;itemid=170468125748'/>
    ...and so on

    I can parse the updated and published tags, but not the link and category tags. Here are my startElement and endElement overrides:

    public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
        if (qName.equals("title") && xmlTags.peek().equals("entry")) {
            insideEntryTitle = true;
        }
        xmlTags.push(qName);
    }

    public void endElement(String uri, String localName, String qName) throws SAXException {
        // If a "title" element is closed, we start a new line, to prepare
        // printing the new title.
        xmlTags.pop();
        if (insideEntryTitle) {
            insideEntryTitle = false;
            System.out.println();
        }
    }

    Declaration for xmlTags:

    private Stack<String> xmlTags = new Stack<String>();

    Any help, guys? This is my first post here... I hope I have followed the posting rules! Thanks a ton.

    Read the article

  • Best way to set up multiple monitors?

    - by terrani
    Hello, I currently have 5 displays. The following is how I installed them:

    Three 19'' displays for workspace. One of them is connected directly to the graphics card via DVI; two of them are using USB graphics adapters.
    One 720p projector - connected directly to the graphics card via DVI.
    One 30'' Dell monitor - currently connected to a laptop via VGA.

    I would like to connect the 30'' Dell to my main computer. I do not game or use graphics applications. What would be my best options? I am thinking of buying two lower-performance ATI or Nvidia cards and installing them as CrossFire (???) or SLI. Am I thinking correctly?

    Read the article

  • Load Average runaway

    - by mewrei
    Is there any way to chase down lockups and runaway load averages? Every so often (pretty randomly) my load average will spike to over 5, usually to around 10-15 and sometimes as high as 75 (on a dual-core machine), and cause my system to lock for an indeterminate amount of time. The only thing I can possibly trace it to is using nVidia fakeraid (RAID-1) with JFS on top of that for my /home partition. I also noticed that when my load averages spike, the power management system doesn't step up my processor speed from 1.6 to its maximum 2.13GHz clock speed (not sure if this makes a huge difference for this problem). Any ideas?

    Read the article

  • Hardware selection for Linux machine

    - by bguiz
    Hi, I am building a new box and planning to install Ubuntu 9.04 or 9.10 on it. I am wary of the hardware selection because in the past I struggled with a lack of drivers or driver incompatibility with the network card, video card, etc. The last time I built a Linux box was 2007, and I have not kept up to date with the changes since. One notable difference is that I can no longer find motherboards with nVidia chipsets. See what I mean (links to my local shop's website):

    Intel motherboards: http://www.centrecom.com.au/catalog/default.php?page=1&cPath=36_62
    AMD motherboards: http://www.centrecom.com.au/catalog/default.php?page=1&cPath=36_63

    I have already checked the Ubuntu forums, but their motherboards section is rather outdated, and I did not look further. I would like to know what Linux-compatible hardware you have had success with. Thank you!

    Read the article

  • Win7 glass-like Aero feature - unable to enable

    - by user24752
    I have a 4-month-old PC with Win7 Home Premium x64. The Windows Experience Index is 5.4. Intel i5 processor, 6GB memory, nVidia GT220 video card.

    During games, Windows reported a shortage of system resources, so it switched the desktop back to the "Windows 7 Basic" desktop theme. After the game was over, I could switch back to the normal theme and enjoy all the Aero eye candy. However, lately the glass-like window transparency feature got disabled, and I found no way to enable it again. There is a troubleshooting option in Control Panel saying: "Find and fix problems with transparency and other visual effects". If I launch that, it does not find anything. The Event Viewer is full of the following warnings:

    The Desktop Window Manager is experiencing heavy resource contention. Scenario : The Desktop Window Manager responsiveness has degraded.

    Taskbar, window borders, etc. - none of the other transparency features work, and I cannot turn them on. Any thoughts?

    Read the article

  • Windows Vista Update Now Won't Boot Up

    - by thatryan
    My friend just updated her Windows Vista to Service Pack 1, or tried to. Now it won't boot up - just a black screen, some errors, etc. I tried googling it, and it seems lots of people have had this problem. Has anyone found a fix for it? I read somewhere, I believe, that Microsoft said to delete some files - Nvidia maybe? But I cannot find that again, and I forgot the exact error code I searched for before. Does anyone know what I am talking about? LOL. Thanks guys.

    Read the article

  • Acer Aspire ASE700-UQ660A will not respond to power button

    - by Tim R.
    This is something of a continuation of this question. I am now completely unable to boot this computer. The last time I used it, I used hibernation mode. When I needed to use it again, it would not respond at all to the power button, keyboard, or mouse. I tried:

    Holding down the power button for 15 seconds
    Pressing the power button
    Unplugging the power cord for 30 seconds, plugging it back in, and trying again
    Removing the motherboard battery for over a minute and reinstalling it

    Before removing the motherboard battery, none of the lights on the front of the computer lit up. After reinstalling the battery and plugging the power cord back in, the light behind the power button is constantly illuminated (without even pressing the power button), but there is still no response to the power button, no fans turn on - nothing else that would indicate that it is running.

    System: Acer Aspire ASE700-UQ660A (specs should be all factory defaults except:)
    4 GB RAM
    Nvidia GeForce 8600 GT with driver version 197.45
    Windows 7 Professional 64-bit

    Read the article

  • Macbook Pro suddenly lagging video playback + Flash sites

    - by Mathias
    I have a MacBook Pro: OS X Lion, Intel Core 2 Duo, 4GB RAM, NVidia GeForce 8600M GT with 128 MB RAM, Intel X25-M SSD. Approximately 4 years old. I had been running Flash sites and playing videos without any problems for years. Then suddenly, 3 months ago, Flash sites like http://thefwa.com started lagging in all browsers - even mouseover animations, anything. Video playback in e.g. VLC and QuickTime is now lagging as well. These are the same videos I played before; I tried installing an older version of VLC without any luck. Playing back video in VLC utilizes the CPU at almost 100%, and Flash sites like thefwa.com easily take up 50-60%. It's as if the hardware acceleration stopped working, or the GPU lost its magic.

    UPDATE: The same issues also occurred on Snow Leopard.

    Has anyone experienced something similar, or do you know what might be wrong?

    Read the article

  • Seeking glass LCD monitors with LED backlight

    - by dlamblin
    The only LCD monitors with glass fronts and LED backlighting I can find are the ones by Apple. And they only sell a 24" one at 2.4x the price of any other 24" 1920x1200 monitor, and a 30" one, which honestly I can't fit on my desk. Oh, and the 24" one uses a Mini DisplayPort plug only, so I'd be out of luck until a display-side adapter becomes available. I am generally looking for a 16:10 or 4:3 monitor rather than 16:9. It would be awesome if someone could find another, cheaper monitor that isn't fronted by a plastic film but rather by glass. It would be doubly awesome if said monitor were also 120Hz, so that I can use nVidia's 3D goggles.

    Update: One month and 16 days later, I seem to not be the only one who can't find another glass-fronted LCD computer monitor. LED backlighting is available, though.

    Read the article

  • Windows 7: blue faces in VLC... sometimes

    - by Mala
    Hi, there are a bunch of forum posts about this issue from the last several years, but no resolution that I could find. I have the newest VLC installed on Windows 7 with the newest Nvidia drivers. Suddenly, VLC plays some videos in such a way that faces are blue. When viewed in another media player, or when looking at the thumbnails, this is not an issue. Other videos are not affected. I've tried resetting the VLC options, deleting the VLC folder in the AppData area, etc. I have no other color issues. Does anyone have a fix?

    Read the article

  • Dell Latitude e6520 gaming capabilities?

    - by user1072185
    I am purchasing a heavily discounted Dell Latitude E6520 from a friend and was wondering what kinds of games I could play on it and at what resolutions. I'm not buying it for gaming purposes, but I am curious. The specs are as follows:

    Intel® Core™ i7-2720QM (2.20GHz, 6M cache) with Turbo Boost Technology 2.0
    nVidia® NVS™ 4200M 512MB DDR3 discrete graphics for quad core (upon further research, the CPU requires this graphics card)
    4.0GB, DDR3-1333MHz SDRAM
    500GB 7200rpm hard drive

    I may upgrade to 8GB of memory, depending on whether I am bogged down. What do you guys think?

    Read the article

  • Selection between two laptops for casual gaming [closed]

    - by Prabhpreet
    I have selected two laptops that meet my budget. Here are the differences:

    Laptop #1:
    4GB RAM, Intel Core i5 2450M 2nd Gen processor w/ 2.5 GHz clock speed
    NVIDIA GeForce GT 520MX DDR3 1 GB dedicated graphics
    750 GB SATA II hard disk
    USB 2.0 ports
    6 hrs battery life

    Laptop #2:
    4 GB RAM, Intel Core i5 3210M 3rd Gen processor w/ 2.5 GHz clock speed
    Integrated Intel HD Graphics 4000
    500 GB SATA hard disk
    USB 3.0 ports
    3 hrs battery life (this concerns me)

    First and foremost, does dedicated graphics matter for a casual gamer like me? Secondly, does the generation of the processors make a difference despite the same clock speed? Thirdly, do USB 3.0 ports make a difference? And lastly, which laptop is more future-proof? Please help me out. Thanks!

    Read the article

  • How can I stop Ubuntu from playing audio from 2 interfaces at the same time?

    - by Solignis
    Hi there, I just loaded Ubuntu 10.10 Maverick on my home machine. The machine is running a Core 2 Duo E6750 on an MSI motherboard with an Nvidia GTX260-OC graphics card. The problem I am having, as stated in the title, is that for some reason Ubuntu is playing audio through my headphones coming out of the computer and, at the exact same time, through the HDMI connection coming out of the graphics card (it has a plug to allow this). What is going on? I have never seen this before. Most importantly of all, can it be fixed so that I can separate the 2 interfaces? One is standard PC audio I/O, and the HDMI one is connected through the mobo's internal S/PDIF. More information can be provided if required.

    Read the article

  • Windows 7 using exactly HALF the installed memory

    - by Nathan Ridley
    I've taken this directly from System Information:

    Installed Physical Memory (RAM)  4.00 GB
    Total Physical Memory            2.00 GB
    Available Physical Memory        434 MB
    Total Virtual Memory             5.10 GB
    Available Virtual Memory         1.19 GB
    Page File Space                  3.11 GB

    Also, the BIOS reports a full 4GB available. Note the 4GB installed, yet 2GB total. I understand that on a 32-bit operating system you'll never get the full 4GB of RAM; however, typically you'll get in the range of 2.5-3.2GB. I have only 2GB available! My swap file goes nuts when I do anything! Note that I have dual SLI Nvidia video cards, each with 512MB of on-board RAM, though I have the SLI feature turned off. Anybody know why Windows might claim that I have exactly 2GB of RAM total?

    Note: previously asked on SuperUser, but closed as "belongs on superuser" before this site opened: http://serverfault.com/questions/39603/windows-7-using-exactly-half-the-installed-memory (I still need an answer!)

    Read the article
