Search Results

Search found 3137 results on 126 pages for 'digital signature'.


  • Is the Scala 2.8 collections library a case of "the longest suicide note in history" ?

    - by oxbow_lakes
    First note the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective but it is a genuine question, I've made it CW and I'd like some opinions on the matter. Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford and I've been programming commercially for almost 12 years and in Scala for about a year (also commercially).

    I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that the library, from a usage perspective, has changed little. For example...

        > List("Paris", "London").map(_.length)
        res0: List[Int] = List(5, 6)

    ...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like:

        def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

    For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user-base).

    Is this going to put people off coming to Scala? Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off? Was the library re-design a sensible idea? If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens?

    Steve Yegge once attacked Scala (mistakenly in my opinion) for what he saw as its overcomplicated type system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java). Note - I should be clear that, whilst I believe that Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held beliefs that the proposal represented a mistake.

    Read the article

  • Turtle Graphics as a Haskell Monad

    - by iliis
    I'm trying to implement turtle graphics in Haskell. The goal is to be able to write a function like this:

        draw_something = do
          forward 100
          right 90
          forward 100
          ...

    and then have it produce a list of points (maybe with additional properties):

        > draw_something (0,0) 0  -- start at (0,0) facing east (0 degrees)
        [(0,0), (0,100), (-100,100), ...]

    I have all this working in a 'normal' way, but I fail to implement it as a Haskell Monad and use the do-notation. The basic code:

        data State a = State (a, a) a -- (x,y), angle
            deriving (Show, Eq)

        initstate :: State Float
        initstate = State (0.0,0.0) 0.0

        -- constrain angles to 0 to 2*pi
        fmod :: Float -> Float
        fmod a
          | a >= 2*pi = fmod (a-2*pi)
          | a < 0     = fmod (a+2*pi)
          | otherwise = a

        forward :: Float -> State Float -> [State Float]
        forward d (State (x,y) angle) = [State (x + d * (sin angle), y + d * (cos angle)) angle]

        right :: Float -> State Float -> [State Float]
        right d (State pos angle) = [State pos (fmod (angle+d))]

        bind :: [State a] -> (State a -> [State a]) -> [State a]
        bind xs f = xs ++ (f (head $ reverse xs))

        ret :: State a -> [State a]
        ret x = [x]

    With this I can now write

        > [initstate] `bind` (forward 100) `bind` (right (pi/2)) `bind` (forward 100)
        [State (0.0,0.0) 0.0,State (0.0,100.0) 0.0,State (0.0,100.0) 1.5707964,State (100.0,99.99999) 1.5707964]

    and get the expected result. However I fail to implement this as an instance of Monad.

        instance Monad [State] where
          ...

    results in

        `State' is not applied to enough type arguments
        Expected kind `*', but `State' has kind `* -> *'
        In the instance declaration for `Monad [State]'

    And if I wrap the list in a new object

        data StateList a = StateList [State a]

        instance Monad StateList where
          return x = StateList [x]

    I get

        Couldn't match type `a' with `State a'
          `a' is a rigid type variable bound by
              the type signature for return :: a -> StateList a at logo.hs:38:9
        In the expression: x
        In the first argument of `StateList', namely `[x]'
        In the expression: StateList [x]

    I tried various other versions but I never got it to run as I'd like to. What am I doing wrong? What do I understand incorrectly?

    Read the article

  • Unable to add SSH pub key for GitHub

    - by Kaushik
    I am trying to set up a new GitHub account as part of learning Ruby on Rails. My OS is Windows. I am having the following problem while trying to add my public key to the GitHub SSH public keys. When I put the key in the text area, supply a name, and click 'Add Key', I am taken to the Job profile page without any confirmation that the key has been added. (I am using an SSH client GUI to generate RSA keys.) When I come back to the SSH public keys page, I see that the key is not there. I tried this multiple times... without any result.

    Just as a test, I tried to ssh to the GitHub account using 'ssh -v [email protected]' and here is the verbose output:

        OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
        debug1: Connecting to github.com [207.97.227.239] port 22.
        debug1: Connection established.
        debug1: identity file /c/Users/Administrator/.ssh/identity type -1
        debug1: identity file /c/Users/Administrator/.ssh/id_rsa type -1
        debug1: identity file /c/Users/Administrator/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2
        debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_4.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'github.com' is known and matches the RSA host key.
        debug1: Found key in /c/Users/Administrator/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/Administrator/.ssh/identity
        debug1: Trying private key: /c/Users/Administrator/.ssh/id_rsa
        debug1: Trying private key: /c/Users/Administrator/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Also, why is it looking for the keys in /c/Users/Administrator/.ssh/? Is there a possibility of changing this default location?

    EDIT: With Mozilla, I get the error message:

        Oops! The key has already been taken. The key is invalid. It must begin with 'ssh-rsa' or 'ssh-dss'.

    My key looks like:

        ---- BEGIN SSH2 PUBLIC KEY ----
        Comment: "[2048-bit rsa, Administrator@Kaushik-THINK, Sun Jan 02 2011 \
        02:40:03]"
        AAAAB3NzaC1yc2EAAAADAQABAAABAQDoA5xqJozKmAHTGh9hgmagsFOl2hdPzS8ZPV9Ta1
        xU0JiUnHef38Rvz/5oqL1wW7SjmZbc/+tCPOfU1lg3UisFXajJhek2bjJ2qsKd4Sjax2Nj
        ZMYD7djPb8rokUYSKW3bdYyJHtJH+murz04UGdCcZ8HqkMTzqlh3zAIK7SIlCy+mtAi5A/
        sm0JbqlNGHB6YrL1aWIcOSolIx2HWt8cWhlK77guT9dPgd0HT59Gn0uhO7UWGLrNdJut0x
        ian3HdvNYMUXoO/SkNlxvWRgZ1UOhaB+qf4hw5RCGcBbqP3fM4LKpurHZx4wEpgmM0e4EM
        +2PYdf46uxChNdBl7J5sZF
        ---- END SSH2 PUBLIC KEY ----

    I still can't see the key... so not sure how it says it is already taken.

    Read the article

  • Do RAID controllers commonly have SATA drive brand compatibility issues?

    - by Jeff Atwood
    We've struggled with the RAID controller in our database server, a Lenovo ThinkServer RD120. It is a rebranded Adaptec that Lenovo / IBM dubs the ServeRAID 8k. We have patched this ServeRAID 8k up to the very latest and greatest: RAID bios version RAID backplane bios version Windows Server 2008 driver This RAID controller has had multiple critical BIOS updates even in the short 4 month time we've owned it, and the change history is just.. well, scary. We've tried both write-back and write-through strategies on the logical RAID drives. We still get intermittent I/O errors under heavy disk activity. They are not common, but serious when they happen, as they cause SQL Server 2008 I/O timeouts and sometimes failure of SQL connection pools. We were at the end of our rope troubleshooting this problem. Short of hardcore stuff like replacing the entire server, or replacing the RAID hardware, we were getting desperate. When I first got the server, I had a problem where drive bay #6 wasn't recognized. Switching out hard drives to a different brand, strangely, fixed this -- and updating the RAID BIOS (for the first of many times) fixed it permanently, so I was able to use the original "incompatible" drive in bay 6. On a hunch, I began to assume that the Western Digital SATA hard drives I chose were somehow incompatible with the ServeRAID 8k controller. Buying 6 new hard drives was one of the cheaper options on the table, so I went for 6 Hitachi (aka IBM, aka Lenovo) hard drives under the theory that an IBM/Lenovo RAID controller is more likely to work with the drives it's typically sold with. Looks like that hunch paid off -- we've been through three of our heaviest load days (mon,tue,wed) without a single I/O error of any kind. Prior to this we regularly had at least one I/O "event" in this time frame. It sure looks like switching brands of hard drive has fixed our intermittent RAID I/O problems! While I understand that IBM/Lenovo probably tests their RAID controller exclusively with their own brand of hard drives, I'm disturbed that a RAID controller would have such subtle I/O problems with particular brands of hard drives. So my question is, is this sort of SATA drive incompatibility common with RAID controllers? Are there some brands of drives that work better than others, or are "validated" against particular RAID controller? I had sort of assumed that all commodity SATA hard drives were alike and would work reasonably well in any given RAID controller (of sufficient quality).

    Read the article

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analysis

    This question pertains to the usage of the HMAC routines in OpenSSL. Since the OpenSSL documentation is a tad on the weak side in certain areas, profiling has revealed that using the one-shot call:

        unsigned char *HMAC(const EVP_MD *evp_md, const void *key,
                            int key_len, const unsigned char *d, int n,
                            unsigned char *md, unsigned int *md_len);

    (from here) shows that 40% of my library runtime is devoted to creating and taking down HMAC_CTX's behind the scenes. There are also two additional functions to create and destroy an HMAC_CTX explicitly:

        HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called.

        HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required.

    These two function calls are prefixed with: "The following functions may be used if the message is not completely stored in memory". My data fits entirely in memory, so I choose the HMAC function -- the one whose signature is shown above. The context, as described by the man page, is made use of by using the following two functions:

        HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data).

        HMAC_Final() places the message authentication code in md, which must have space for the hash function output.

    The Scope of the Application

    My application generates an authentic (HMAC, which is also used as a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web servers and frameworks: Windows / Linux as OS; nginx, Apache and IIS as web servers; and Python / .NET and C++ web-server filters. The description above should clarify that the library needs to be thread safe, and potentially have resumable processing state -- i.e., lightweight threads sharing an OS thread (which might leave thread-local memory out of the picture).

    The Question

    How do I get rid of the 40% overhead on each invocation in a (1) thread-safe / (2) resumable-state way? (2) is optional since I have all of the source data present in one go, and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So, (1) can probably be done using thread-local memory -- but how do I reuse the CTX's? Does the HMAC_Final() call make the CTX reusable? (2) optional: in this case I would have to create a pool of CTX's. (3) How does the HMAC function do this? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary will be useful.
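
    One way to avoid the per-call setup cost measured above is to keep a long-lived HMAC_CTX per thread and re-key it with HMAC_Init_ex() before each message. A minimal sketch in C, written against the pre-1.1.0 OpenSSL API (HMAC_CTX on the stack, HMAC_CTX_init/HMAC_CTX_cleanup; 1.1.0 and later use HMAC_CTX_new/HMAC_CTX_free instead); the hmac_slot wrapper and function names are invented for illustration, not part of OpenSSL:

        #include <openssl/hmac.h>
        #include <openssl/evp.h>

        /* Keep one of these per thread (e.g. in thread-local storage) and reuse it. */
        typedef struct {
            HMAC_CTX ctx;
        } hmac_slot;

        void hmac_slot_init(hmac_slot *s)    { HMAC_CTX_init(&s->ctx); }
        void hmac_slot_cleanup(hmac_slot *s) { HMAC_CTX_cleanup(&s->ctx); }

        /* Compute an HMAC-SHA1 over an in-memory message, reusing the cached context.
         * Passing the key to HMAC_Init_ex() re-keys the context, so the same slot
         * can be used again for the next message after HMAC_Final(). */
        void hmac_sha1(hmac_slot *s,
                       const void *key, int key_len,
                       const unsigned char *msg, size_t msg_len,
                       unsigned char out[EVP_MAX_MD_SIZE], unsigned int *out_len)
        {
            HMAC_Init_ex(&s->ctx, key, key_len, EVP_sha1(), NULL); /* re-key, no re-alloc */
            HMAC_Update(&s->ctx, msg, msg_len);
            HMAC_Final(&s->ctx, out, out_len);                     /* ctx reusable after re-init */
        }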

    Read the article

  • Seeking STL-aware c++filt

    - by Don Wakefield
    In my development environment, I'm compiling a code base using GNU C++ 3.4.6. Code is under development, and unfortunately crashes now and then. It's nice to be able to run the traceback through a demangler, and I use c++filt 3.4. The problem comes when functions have a number of STL parameters. Consider

        My_callback::operator()( Status&,
                                 std::set<std::string> const&,
                                 std::vector<My_parameter*> const&,
                                 My_attribute_set const&,
                                 std::vector<My_parameter_base*> const&,
                                 std::vector<My_parameter> const&,
                                 std::set<std::string> const& )
        {
          // ...
        }

    When this function is in the traceback, the mangled output on my platform is:

        (_ZN30My_callbackclER11StatusRKSt3setISsSt4lessISsESaISsEERKSt6vectorIP13My_parameterSaISB_EERK17My_attribute_setRKS9_IP18My_parameter_baseSaISK_EERKS9_ISA_SaISA_EES8_+0x76a) [0x13ffdaa]

    c++filt kindly demangles it to

        (My_callback::operator()(Status&, std::set<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<My_parameter*, std::allocator<My_parameter*> > const&, My_attribute_set const&, std::vector<My_parameter_base*, std::allocator<My_parameter_base*> > const&, std::vector<My_parameter, std::allocator<My_parameter> > const&, std::set<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)+0x76a) [0x13ffdaa]

    This is the same problem as compiler errors encountered when using templates. However, the STL is a fairly regular and recognizable package of templates. So what I'm hoping is that someone out there has created an enhanced version of c++filt which would dump something closer to the original function signature. Any hints?
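
    One low-tech direction, sketched here in C: post-process the demangled output with a small filter that textually collapses the default template arguments that libstdc++ expands (std::basic_string<char, std::char_traits<char>, std::allocator<char> > back to std::string, and so on). The rewrite table below is deliberately tiny and purely illustrative; further expansions can be added in the same way.

        #include <stdio.h>
        #include <string.h>

        /* Replace every occurrence of `from` with `to` inside `line`, in place.
         * Safe here because every replacement is shorter than the text it replaces. */
        static void replace_all(char *line, const char *from, const char *to)
        {
            size_t flen = strlen(from), tlen = strlen(to);
            char *p = line;
            while ((p = strstr(p, from)) != NULL) {
                memmove(p + tlen, p + flen, strlen(p + flen) + 1);
                memcpy(p, to, tlen);
                p += tlen;
            }
        }

        int main(void)
        {
            /* Collapse default template arguments back to their shorthand.
             * Extend this table with more libstdc++ expansions as needed. */
            static const char *rules[][2] = {
                { "std::basic_string<char, std::char_traits<char>, std::allocator<char> >",
                  "std::string" },
            };

            char line[16384];
            while (fgets(line, sizeof line, stdin)) {   /* usage: c++filt | ./stl_shrink */
                for (size_t i = 0; i < sizeof rules / sizeof rules[0]; ++i)
                    replace_all(line, rules[i][0], rules[i][1]);
                fputs(line, stdout);
            }
            return 0;
        }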

    Read the article

  • ASA and cisco vs NSA sonic firewall

    - by Lbaker101
    Currently I’m trying to structure our network to fully support and be redundant with BGP/Multi homing. Our current company size is 40 employees but the major part of that is our Development department. We are a software company and continued connection to the internet is a requirement as 90% of work stops when the net goes down. The only thing hosted on site (that needs to remain up) is our exchange server. Right now i'm faced with 2 different directions and was wondering if I could get your opinions on this. We will have 2 ISPs that are both 20meg up/down and dedicated fiber (so 40megs combined). This is handed off as an Ethernet cable into our server room. ISP#1 first digital ISP#2 CenturyLink we currently have 2x ASA5505s but the 2nd one is not in use. It was there to be a failover and it just needs the security+ license to be matched with the primary device. But this depends on the network structure. I have been looking into the hardware that would be required to be fully redundant and I found that we will either of the following. 2x Cisco 2921+ series routers with failover licenses. They will go in front of the ASAs and either connects in a failover state or 1 ISP into each of the 2921 series routers and then 1 line into each of the ASAs (thus all 4 hardware components will be used actively). So 2x Cisco 2921+ series routers 2x Cisco ASA5505 firewalls The other route 2x SonicWalls NSA2400MX series. 1 primary and the secondary will be in a failover state. This will remove the ASAs from the network and be about 2k cheaper than the cisco route. This also brings down the points of failure because it’s just the 2x sonicwalls It will also allow us to scale all the way up to 200-400 users (depending on their configuration). This also makes so the Sonic walls. So the real question is with the added functionality ect of the sonicwall is there a point in paying so much more to stay the cisco route? Thanks!

    Read the article

  • PCI-DSS compliance for business with only swipe terminals [migrated]

    - by rowatt
    I support the IT infrastructure for a small retail business which is now required to undergo a PCI-DSS assessment. The payment service and terminal provider (Streamline) has asked that we use Trustwave to do the PCI-DSS certification. The problem I face is that if I answer all questions and follow Trustwave's requirements to the letter, we will have to invest significantly in networking equipment to segment LANs and/or do internal vulnerability scanning, while at the same time Streamline assures me that the terminals we have (Verifone VX670-B and MagIC3 X-8) are secure, don't store any credit card information and are PCI-DSS compliant, so by implication we don't need to take any action to ensure their network security. I'm looking for any suggestions as to how we can most easily meet the networking requirements for PCI-DSS.

    Some background on our current network setup:

    - single wired LAN, also with WiFi turned on (though if this creates any PCI-DSS complexities we can turn it off).
    - single Netgear ADSL router. This is the only firewall we have in place, and the firewall is the out-of-the-box configuration (i.e. no DMZ, SNMP etc). Passwords have been changed though :-)
    - a few Windows PCs and 2 Windows-based tills, none of which ever see any credit card information at all.
    - two swipe terminals. Until a few months ago (before we were told we had to be PCI-DSS certified) these terminals did auth/capture over the phone. Streamline suggested we moved to their IP Broadband service, which instead uses an SSL encrypted channel over the internet to do auth/capture, so we now use that service.

    We don't do any ecommerce or receive payments over the internet. All transactions are either cardholder present, or MOTO with details given over phone and typed direct into the terminal. We're based in the UK.

    As I currently understand it we have three options in order to get PCI-DSS certification:

    1. segment our network so the POS terminals are isolated from all PCs, and set up internal vulnerability scanning on that network.
    2. don't segment the network, and have to do more internal scanning and have more onerous management of PCs than I think we need (for example, though the tills are Windows based, they are fully managed so I have no control over software update policies, anti virus etc). All PCs have anti virus (MSE) and windows updates automatically applied, but we don't have any centralised
    3. go back to auth/capture over phone lines.

    I can't imagine we are the first merchant to be in this situation. I'm looking for any recommendations for a simple, cost effective way to be PCI-DSS compliant - either by doing 1 or 2 above with (hopefully) simple and inexpensive equipment/software, or any other ways if there's a better way to do this. Or... should we just go back to the digital stone age and do auth/capture over the phone, which means we don't need to do anything on our network to be PCI-DSS certified?

    Read the article

  • SWIG & Java Use of carrays.i and array_functions for C Array of Strings

    - by c12
    I have the below configuration where I'm trying to create a test C function that returns a pointer to an Array of Strings and then wrap that using SWIG's carrays.i and array_functions so that I can access the Array elements in Java.

    Uncertainties:

    - %array_functions(char, SWIGArrayUtility); - not sure if char is correct
    - inline char *getCharArray() - not sure if C function signature is correct
    - String result = getCharArray(); - String return seems odd, but that's what is generated by SWIG

    SWIG.i:

        %module Test
        %{
        #include "test.h"
        %}
        %include <carrays.i>
        %array_functions(char, SWIGArrayUtility);
        %include "test.h"
        %pragma(java) modulecode=%{
          public static char[] getCharArrayImpl() {
            final int num = numFoo();
            char ret[] = new char[num];
            String result = getCharArray();
            for (int i = 0; i < num; ++i) {
              ret[i] = SWIGArrayUtility_getitem(result, i);
            }
            return ret;
          }
        %}

    Inline Header C Function:

        #ifndef TEST_H
        #define TEST_H

        inline static unsigned short numFoo() {
          return 3;
        }

        inline char *getCharArray(){
          static char* foo[3];
          foo[0]="ABC";
          foo[1]="5CDE";
          foo[2]="EEE6";
          return foo;
        }
        #endif

    Java Main Tester:

        public class TestMain {
          public static void main(String[] args) {
            System.loadLibrary("TestJni");
            char[] test = Test.getCharArrayImpl();
            System.out.println("length=" + test.length);
            for(int i=0; i < test.length; i++){
              System.out.println(test[i]);
            }
          }
        }

    Java Main Tester Output:

        length=3
        ?
        ?
        ,

    SWIG Generated Java APIs:

        public class Test {
          public static String new_SWIGArrayUtility(int nelements) {
            return TestJNI.new_SWIGArrayUtility(nelements);
          }

          public static void delete_SWIGArrayUtility(String ary) {
            TestJNI.delete_SWIGArrayUtility(ary);
          }

          public static char SWIGArrayUtility_getitem(String ary, int index) {
            return TestJNI.SWIGArrayUtility_getitem(ary, index);
          }

          public static void SWIGArrayUtility_setitem(String ary, int index, char value) {
            TestJNI.SWIGArrayUtility_setitem(ary, index, value);
          }

          public static int numFoo() {
            return TestJNI.numFoo();
          }

          public static String getCharArray() {
            return TestJNI.getCharArray();
          }

          public static char[] getCharArrayImpl() {
            final int num = numFoo();
            char ret[] = new char[num];
            String result = getCharArray();
            System.out.println("result=" + result);
            for (int i = 0; i < num; ++i) {
              ret[i] = SWIGArrayUtility_getitem(result, i);
              System.out.println("ret[" + i + "]=" + ret[i]);
            }
            return ret;
          }
        }
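
    On the second uncertainty: %array_functions(char, SWIGArrayUtility) wraps a plain char buffer, whereas getCharArray() above builds an array of string pointers (char *foo[3]) and returns it through a char * return type, which is why the Java side sees garbage. Purely as an illustration of a C signature that does match that wrapper (not necessarily the best SWIG mapping), here is a sketch that flattens the three strings into one newline-separated buffer; all names are illustrative:

        #include <string.h>

        static const char *words[3] = { "ABC", "5CDE", "EEE6" };

        /* Flatten the three strings into one static, newline-separated char buffer,
         * so the return type really is a contiguous "char *" as the wrapper expects. */
        static char flat[64];

        char *getCharArray(void)
        {
            flat[0] = '\0';
            for (int i = 0; i < 3; ++i) {
                strcat(flat, words[i]);
                if (i < 2)
                    strcat(flat, "\n");
            }
            return flat;
        }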

    Read the article

  • How to define generic super type for static factory method?

    - by Esko
    If this has already been asked, please link and close this one.

    I'm currently prototyping a design for a simplified API of a certain other API that's a lot more complex (and potentially dangerous) to use. Considering the related somewhat complex object creation I decided to use static factory methods to simplify the API and I currently have the following which works as expected:

        public class Glue<T> {
            private List<Type<T>> types;

            private Glue() {
                types = new ArrayList<Type<T>>();
            }

            private static class Type<T> {
                private T value;
                /* some other properties, omitted for simplicity */

                public Type(T value) {
                    this.value = value;
                }
            }

            public static <T> Glue<T> glueFactory(String name, T first, T second) {
                Glue<T> g = new Glue<T>();
                Type<T> firstType = new Glue.Type<T>(first);
                Type<T> secondType = new Glue.Type<T>(second);

                g.types.add(firstType);
                g.types.add(secondType);

                /* omitted complex stuff */

                return g;
            }
        }

    As said, this works as intended. When the API user (=another developer) types

        Glue<Horse> strongGlue = Glue.glueFactory("2HP", new Horse(), new Horse());

    he gets exactly what he wanted. What I'm missing is how do I enforce that Horse - or whatever is put into the factory method - always implements both Serializable and Comparable? Simply adding them to the factory method's signature using <T extends Comparable<T> & Serializable> doesn't necessarily enforce this rule in all cases, only when this simplified API is used. That's why I'd like to add them to the class' definition and then modify the factory method accordingly.

    PS: No horses (and definitely no ponies!) were harmed in writing of this question.

    Read the article

  • Hide a base class method from derived class, but still visible outside of assembly

    - by clintp
    This is a question about tidiness. The project is already working, I'm satisfied with the design but I have a couple of loose ends that I'd like to tie up.

    My project has a plugin architecture. The main body of the program dispatches work to the plugins that each reside in their own AppDomain. The plugins are described with an interface, which is used by the main program (to get the signature for invoking DispatchTaskToPlugin) and by the plugins themselves as an API contract:

        namespace AppServer.Plugin.Common
        {
            public interface IAppServerPlugin
            {
                void Register();
                void DispatchTaskToPlugin(Task t);
                // Other methods omitted
            }
        }

    In the main body of the program Register() is called so that the plugin can register its callback methods with the base class, and then later DispatchTaskToPlugin() is called to get the plugin running. The plugins themselves are in two parts. There's a base class that implements the framework for the plugin (setup, housekeeping, teardown, etc..). This is where DispatchTaskToPlugin is actually defined:

        namespace AppServer.Plugin
        {
            abstract public class BasePlugin : MarshalByRefObject, AppServer.Plugin.Common.IAppServerPlugin
            {
                public void DispatchTaskToPlugin(Task t)
                {
                    // ...
                    // Eventual call to actual plugin code
                    //
                }
                // Other methods omitted
            }
        }

    The actual plugins themselves only need to implement a Register() method (to give the base class the delegates to call eventually) and then their business logic.

        namespace AppServer.Plugin
        {
            public class Plugin : BasePlugin
            {
                override public void Register()
                {
                    // Calls a method in the base class to register itself.
                }

                // Various callback methods, business logic, etc...
            }
        }

    Now in the base class (BasePlugin) I've implemented all kinds of convenience methods, collected data, etc.. for the plugins to use. Everything's kosher except for that lingering method DispatchTaskToPlugin(). It's not supposed to be callable from the Plugin class implementations -- they have no use for it. It's only needed by the dispatcher in the main body of the program.

    How can I prevent the derived classes (Plugin) from seeing the method in the base class (BasePlugin/DispatchTaskToPlugin) but still have it visible from outside of the assembly? I can split hairs and have DispatchTaskToPlugin() throw an exception if it's called from the derived classes, but that's closing the barn door a little late. I'd like to keep it out of Intellisense or possibly have the compiler take care of this for me. Suggestions?

    Read the article

  • Getting RGB values for each pixel from a 24bpp Bitmap in C

    - by seven
    Hello, I want to read the RGB values for each pixel from a .bmp file, so I can convert the bmp into a format suitable for the GBA. So I need to get just the RGB for each pixel and then write this information to a file. I am trying to use the windows.h structures:

        typedef struct
        {
            char signature[2];
            unsigned int fileSize;
            unsigned int reserved;
            unsigned int offset;
        } BmpHeader;

        typedef struct
        {
            unsigned int headerSize;
            unsigned int width;
            unsigned int height;
            unsigned short planeCount;
            unsigned short bitDepth;
            unsigned int compression;
            unsigned int compressedImageSize;
            unsigned int horizontalResolution;
            unsigned int verticalResolution;
            unsigned int numColors;
            unsigned int importantColors;
        } BmpImageInfo;

        typedef struct
        {
            unsigned char blue;
            unsigned char green;
            unsigned char red;
            unsigned char reserved;
        } Rgb;

        typedef struct
        {
            BmpHeader header;
            BmpImageInfo info;
            Rgb colors[256];
            unsigned short image[1];
        } BmpFile;

    but I only need the Rgb struct. So let's say I read "in.bmp":

        FILE *inFile, *outFile;
        inFile = fopen("C://in.bmp", "rb");
        Rgb Palette[256];
        for(i=0; i<256; i++) {
            fread(&Palette[i], sizeof(Rgb), 1, inFile);
        }
        fclose(inFile);

    Is this correct? How do I write only the RGB information to a file? Can anyone please give me some information. Thank you.
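
    For reference, a minimal sketch of pulling the per-pixel values out of a 24bpp, uncompressed BMP (the case in the title), assuming the standard 14-byte file header plus 40-byte info header layout: a 24bpp image has no 256-entry palette, the pixel rows simply start at the offset stored in the file header, bottom row first, with each row padded to a multiple of 4 bytes. Function and variable names are illustrative.

        #include <stdio.h>
        #include <stdlib.h>
        #include <stdint.h>

        /* Read little-endian values out of a raw byte buffer. */
        static uint32_t rd32(const unsigned char *p)
        {
            return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
                   (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
        }
        static uint16_t rd16(const unsigned char *p)
        {
            return (uint16_t)(p[0] | p[1] << 8);
        }

        /* Print "R G B" for every pixel of a 24bpp uncompressed BMP, top row first. */
        int dump_bmp24(const char *path)
        {
            FILE *in = fopen(path, "rb");
            if (!in) return -1;

            unsigned char hdr[54];                      /* file header + info header */
            if (fread(hdr, 1, sizeof hdr, in) != sizeof hdr) { fclose(in); return -1; }

            uint32_t offset = rd32(hdr + 10);           /* start of pixel data       */
            int32_t  width  = (int32_t)rd32(hdr + 18);
            int32_t  height = (int32_t)rd32(hdr + 22);  /* > 0 means bottom-up       */
            uint16_t bpp    = rd16(hdr + 28);
            if (hdr[0] != 'B' || hdr[1] != 'M' || bpp != 24 || height <= 0 || width <= 0) {
                fclose(in);
                return -1;                              /* sketch handles 24bpp bottom-up only */
            }

            long rowSize = ((width * 3) + 3) & ~3L;     /* rows are padded to 4 bytes */
            unsigned char *row = malloc((size_t)rowSize);
            if (!row) { fclose(in); return -1; }

            for (int32_t y = height - 1; y >= 0; --y) { /* walk the image top row first */
                fseek(in, (long)offset + y * rowSize, SEEK_SET);
                if (fread(row, 1, (size_t)rowSize, in) != (size_t)rowSize) break;
                for (int32_t x = 0; x < width; ++x)     /* pixels are stored B,G,R    */
                    printf("%u %u %u\n", row[x * 3 + 2], row[x * 3 + 1], row[x * 3]);
            }

            free(row);
            fclose(in);
            return 0;
        }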

    Read the article

  • Loss of wireless network connectivity when playing video via HDMI cable

    - by Jeff Fohl
    Hi Folks - New to Super User, so I hope this question fits in with the guidelines. Very strange problem I am having, and I am at a loss as to how to continue troubleshooting this one. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels. This is my setup: I have a Dell H2C 730x running Windows 7 64bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6Mbps. The problem I am having occurs when I stream video (or even just browse the web) on the Optoma Projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector, and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time. I can move the mouse around on the projector without triggering the problem. I tried pinging my router when I was playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projecter, pinging the router gives me response times in the hundreds of milliseconds, or times out completely. So, it clearly is something local to my machine - and not some sort of throttling occurring down the line. I would think that it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas? EDIT: I am now leaning towards the possibility that the HDMI cable is somehow interfering with the wireless network, when large amounts of data are being pushed through the cable. Is this possible?

    Read the article

  • Asus M4A79XTD-EVO / AMD Phenom II X4 965 Crashes / BSOD / Hangs / Restarts

    - by Tiby
    I'll try to be as concise as possible because I have a lot to say about my problem, but I'd rather say it when asked or when I feel it's necessary, just to make this initial reading clearer. For about a year and a half I have periods when my system has all the problems in the title (I'll use the word 'crash' for either one). I'll list some patterns and what I tried to do and what were the results, but the list is not exclusive: usually it crashes when a CPU-intensive operation is in progress, like a game or video encoding or HD movie rendering, but also sometimes crashes when I'm doing nothing after a first crash the system is very unstable and sometimes it crashes even during POST, or doesn't boot at all Some months ago I went to a local service (one that you just put your computer on the table and sit there with a guy and trying to figure out the problem, very rare these days) and they used OCCT and it crashed every time he changed some part to test it out (PSU, RAM, video card, HDD). The last one was the CPU. They changed the CPU and it didn't crash any more. Then when they put my CPU back, it also didn't crash. We figured that the trouble was the thermal paste (probably some 2 years old) because it was the only thing changed while testing. Up until 2 weeks ago, I haven't had any more problems. 2 weeks ago the problems reappeared. I changed again the thermal paste, put some Arctic Silver 5, and for about a week everything worked perfect (tried some games, video encoding, no more crashes). But again it started crashing in the same fashion as the first time. But now, instead, I figured out a very odd behaviour: when I start some of the apps above, in most cases it crashes if I start OCCT and turn on the CPU test, and run any of the programs above, it doesn't crash, even if the CPU is on 100% load (and 65-70 degrees Celsius temperature) if I shut down OCCT and continue using the programs, it crashes in a very short period of time (even if the CPU is on 5-10% load and 40 degrees) There are so many patterns and temporary solutions that I figured out in this year and a half period of time, that I can't include them all because I don't know which one are more relevant, but I'll happily provide any details you ask. My system is: CPU: AMD Phenom II X4 965 (3400 MHz - 125W) MB: ASUS M4A79XTD - EVO RAM: Corsair Vengeance 8 GB (2 x 4GB) CL8 1600MHz Video: HIS Radeon HD5770 1GB PSU: Corsair 750W HDD: Western Digital 1TB OS: Win 7 Enterprise 64 BIT (also tried with Windows Server 2008 R2 Trial and Win XP)

    Read the article

  • Windows Server 2008 Software Raid 5 - Data integrity issues

    - by Fopedush
    I've got a server running Windows Server 2008 R2, with a (windows native) software raid-5 array. The array consists of 7x 1TB Western Digital RE3 and RE4 drives. I have offline backups of this array. The problem is this: I noticed a few days ago after copying a large file to the disk that there was an integrity issue with that file - it was a ~12GB file that I had downloaded via uTorrent. After moving it to the raid array, I used uTorrent to relocate the download location, and performed a re-check so I could seed it from that location. The recheck found that only 6308/6310 chunks of the copied file were intact. My next step was to write a quick powershell script that would copy files to the array, while performing a SHA1 hash of the original and resultant files and comparing them. Smaller files (100-1000MB) copied over just fine. When I started copying larger data (~15GB), I found that the hash check failed about 2/3rds of the time. The corrupt files had very, very small inconsistencies - less than .01%. I further eliminated the possibility of networking or client issues by placing this large file on the C:\ of the server, and copying it repeatedly from there to the array, seeing similar results. Copying the data via explorer, powershell, or the standard windows command prompt yield the same results. None of the copies fail or report any problems. The raid array itself is listed as healthy in disk management. After a few experiments, I shut down the server and ran memtest overnight. No errors were detected. A basic run of chkdsk found no problems, but I did not use the /R flag, as I was unsure how that might affect a software raid-5 volume. I next ran Crystal Disk Info to check the smart data on the drives - but found that CDI only detected 5 out of 7 of the disks in the array. I have no idea why. Nevertheless, CDI shows the following "caution" flags on a single one of the drives: 05 199 199 140 000000000001 Reallocated Sectors Count C5 200 200 __0 000000000001 Current Pending Sector Count Which is a little bit alarming, but I don't really know what to do with the information. I hardly feel like one reallocated sector could be causing this. At this point, I'm looking for some guidance on what to do next. I need to determine the cause of this issue, but I'm hesitant to run chkdsk /R or any bootable disk health checkers because I'm afraid they might break the array. I've considered triggering a re-sync of the array, but I'm not actually sure how to do that without doing something silly like manually dropping a disk and then restoring it. Any advice that could help me ferret out the precise cause of this issue would be greatly appreciated.

    Read the article

  • What to name 2 methods with same signatures

    - by coffeeaddict
    Initially I had a method in our DL that would take in the object it's updating like so:

        internal void UpdateCash(Cash cash)
        {
            using (OurCustomDbConnection conn = CreateConnection("UpdateCash"))
            {
                conn.CommandText = @"update Cash
                                     set captureID = @captureID,
                                         ac_code = @acCode,
                                         captureDate = @captureDate,
                                         errmsg = @errorMessage,
                                         isDebit = @isDebit,
                                         SourceInfoID = @sourceInfoID,
                                         PayPalTransactionInfoID = @payPalTransactionInfoID,
                                         CreditCardTransactionInfoID = @CreditCardTransactionInfoID
                                     where id = @cashID";

                conn.AddParam("@captureID", cash.CaptureID);
                conn.AddParam("@acCode", cash.ActionCode);
                conn.AddParam("@captureDate", cash.CaptureDate);
                conn.AddParam("@errorMessage", cash.ErrorMessage);
                conn.AddParam("@isDebit", cyberCash.IsDebit);
                conn.AddParam("@PayPalTransactionInfoID", cash.PayPalTransactionInfoID);
                conn.AddParam("@CreditCardTransactionInfoID", cash.CreditCardTransactionInfoID);
                conn.AddParam("@sourceInfoID", cash.SourceInfoID);
                conn.AddParam("@cashID", cash.Id);
                conn.ExecuteNonQuery();
            }
        }

    My boss felt that creating an object every time just to update one or two fields is overkill. He recommended using just UpdateCash and sending in the ID for the Cash record and the field I want to update. Well, the problem is I have 2 places in code using my original method, and those 2 places are updating 2 completely different fields in the Cash table. Before, I was just able to get the existing Cash record and shove it into a Cash object, update the properties I wanted updated in the DB, then send the cash object back to my method above.

    I need some advice on what to do here. I have 2 methods and they have the same signature. I'm not quite sure what to rename these because both are updating 2 completely different fields in the Cash table:

        internal void UpdateCash(int cashID, int paypalCaptureID)
        {
            using (OurCustomDbConnection conn = CreateConnection("UpdateCash"))
            {
                conn.CommandText = @"update Cash
                                     set CaptureID = @paypalCaptureID
                                     where id = @cashID";

                conn.AddParam("@captureID", paypalCaptureID);
                conn.ExecuteNonQuery();
            }
        }

        internal void UpdateCash(int cashID, int PayPalTransactionInfoID)
        {
            using (OurCustomDbConnection conn = CreateConnection("UpdateCash"))
            {
                conn.CommandText = @"update Cash
                                     set PaymentSourceID = @PayPalTransactionInfoID
                                     where id = @cashID";

                conn.AddParam("@PayPalTransactionInfoID", PayPalTransactionInfoID);
                conn.ExecuteNonQuery();
            }
        }

    So I thought hmm, maybe change the names to these so that they are now unique and somewhat explain what field it's updating:

        UpdateCashOrderID
        UpdateCashTransactionInfoID

    OK, but those are not really very good names. And I can't go too generic, for example:

        UpdateCashTransaction(int cashID, paypalTransactionID)

    What if we have different types of transaction IDs that the cash record holds besides just the paypalTransactionInfoID, such as the creditCardInfoID? Then what? Transaction doesn't tell me what kind. And furthermore, what if you're updating 2 fields so you have 2 params next to the cashID param:

        UpdateCashTransaction(int cashID, paypalTransactionID, someOtherFieldIWantToUpdate)

    See my frustration? What's the best way to handle this if my boss doesn't like my first route?

    Read the article

  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago the speakers on my Lenovo Thinkpad T410 (Model number: 2537A11) suddenly stopped working randomly. This error happens every time I watch a video or listen to a music file. The sound just abruptly stops. At the moment, I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop and there doesn't appear to be anything else wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems). Here is the output of a nice program someone pointed me to: martin@martin:~/Downloads$ sudo python run.py --monitor Using temporary directory: /dev/shm/hda-analyzer You may remove this directory when finished or if you like to download the most recent copy of hda-analyzer tool. Downloading file hda_analyzer.py Downloading file hda_guilib.py Downloading file hda_codec.py Downloading file hda_proc.py Downloading file hda_graph.py Downloading file hda_mixer.py Downloaded all files, executing hda_analyzer.py Watching 1 cards ====================================== Sound is working normally and then it stops and the following lines appear: Diff for codec 0/0 (0x14f15069): --- +++ @@ -164,17 +164,17 @@ Power: setting=D0, actual=D0 Node 0x1f [Pin Complex] wcaps 0x400501: Stereo Pincap 0x00000010: OUT Pin Default 0x901701f0: [Fixed] Speaker at Int N/A Conn = Analog, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT - Power: setting=D0, actual=D0 + Power: setting=D3, actual=D3 Connection: 2 0x10* 0x11 Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE And now there is also an error in the dmesg output hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj. I changed the bdl_pos_adj to various numbers (-1, 0, 64, 1024) and either there is no change at all or dmesg reports that the adjustment is too big. I wonder if this bdl_pos_adj is the real reason for the error. Here is my hardware information provided by alsa-info.sh website. Okay, i did some serious testing and even installed Windows and now i officially conclude that this is a hard-ware related issue with my Laptop speakers. Reason: The error occurs in my installed Debian Linux, an Ubuntu Live distribution and Windows XP No error-message appears in all of the OS. The sound just keeps running and i can't hear a thing. I tested different setups, including OSS, ALSA and the pulseaudio server on top If i use my new usb-headphones i can hear sound all the time without any sudden silences. So obviously, although hard to believe, my laptop speakers are not okay (never heard of similar cases). I'll award the bounty to anyone who can point me to good tutorials or the procedure how to exchange my T410 speakers (i still have warranty. The laptop was bought in Germany, but now i am in Denmark). Or to someone who can explain me the output from hda-analyzer (big log above).

    Read the article

  • Self updating app, wont overwrite existing app, using Android packagemanager?

    - by LokiSinclair
    I know there are plenty of questions about this on here, but I've tried everything (but the correct 'thing', obviously!) and nothing seems to shine any light on the problem I'm having. I've written an app (for a customer), which is designed to be hosted on their own server. The app references a simple text file with the latest version code in it and checks it against it's own version. If it's out of date it goes off and downloads the update. Everything is working as intended up to this point. I use the: Intent i = new Intent(Intent.ACTION_VIEW); i.setDataAndType(Uri.fromFile(outputFile), "application/vnd.android.package-archive"); i.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); startActivity(i); ...code to start the install process of the newly downloaded .apk file. And that all starts as I would expect. I click on "Install" - when I'm prompted to confirm the overwriting of the current app, with the new. It starts, and then displays: App not installed. And existing package by the same name with a conflicting signature is already installed. Now I'm aware that Android can't have multiple applications sharing the same package name, which is fine, but nothing comes up in LogCat and I can only assume that the OS is annoyed at me attempting to 'update' my app, even though I'm going through all the correct channels and using the inbuilt package manager to do it for me! Can anyone tell me what the OS is moaning about? I'm not attempting to install two apps side by side, I want it to update it, which it starts to do, and then gets really confused. Is it something to do with me using the same keystore for signing the packages? I highly doubt it as I've used the same keystores previously to handle updates to games and the like, but I just can't figure out what it's complaining about. Hopefully someone out there has had this issue and solved it, and can point me in the right direction. I'm flying a bit blind with the limited information it's giving me :( Cheers.

    Read the article

  • How to verify if the private key matches with the certificate..?

    - by surendhar_s
    I have the private key stored as a .key file:

        -----BEGIN RSA PRIVATE KEY-----
        MIICXAIBAAKBgQD5YBS6V3APdgqaWAkijIUHRK4KQ6eChSaRWaw9L/4u8o3T1s8J
        rUFHQhcIo5LPaQ4BrIuzHS8yzZf0m3viCTdZAiDn1ZjC2koquJ53rfDzqYxZFrId
        7a4QYUCvM0gqx5nQ+lw1KoY/CDAoZN+sO7IJ4WkMg5XbgTWlSLBeBg0gMwIDAQAB
        AoGASKDKCKdUlLwtRFxldLF2QPKouYaQr7u1ytlSB5QFtIih89N5Avl5rJY7/SEe
        rdeL48LsAON8DpDAM9Zg0ykZ+/gsYI/C8b5Ch3QVgU9m50j9q8pVT04EOCYmsFi0
        DBnwNBRLDESvm1p6NqKEc7zO9zjABgBvwL+loEVa1JFcp5ECQQD9/sekGTzzvKa5
        SSVQOZmbwttPBjD44KRKi6LC7rQahM1PDqmCwPFgMVpRZL6dViBzYyWeWxN08Fuv
        p+sIwwLrAkEA+1f3VnSgIduzF9McMfZoNIkkZongcDAzjQ8sIHXwwTklkZcCqn69
        qTVPmhyEDA/dJeAK3GhalcSqOFRFEC812QJAXStgQCmh2iaRYdYbAdqfJivMFqjG
        vgRpP48JHUhCeJfOV/mg5H2yDP8Nil3SLhSxwqHT4sq10Gd6umx2IrimEQJAFNA1
        ACjKNeOOkhN+SzjfajJNHFyghEnJiw3NlqaNmEKWNNcvdlTmecObYuSnnqQVqRRD
        cfsGPU661c1MpslyCQJBAPqN0VXRMwfU29a3Ve0TF4Aiu1iq88aIPHsT3GKVURpO
        XNatMFINBW8ywN5euu8oYaeeKdrVSMW415a5+XEzEBY=
        -----END RSA PRIVATE KEY-----

    And I extracted the public key from the SSL certificate file. Below is the code I tried, to verify if the private key matches the SSL certificate or not. I used the modulus [i.e. private key getModulus() == public key getModulus()] to check if they are matching, and this seems to hold only for RSA keys. But I want to check for other keys as well. Is there any other alternative to do the same?

        private static boolean verifySignature(File serverCertificateFile, File serverCertificateKey) {
            try {
                byte[] certificateBytes = FileUtils.readFileToByteArray(serverCertificateFile);
                //byte[] keyBytes = FileUtils.readFileToByteArray(serverCertificateKey);
                RandomAccessFile raf = new RandomAccessFile(serverCertificateKey, "r");
                byte[] buf = new byte[(int) raf.length()];
                raf.readFully(buf);
                raf.close();
                PKCS8EncodedKeySpec kspec = new PKCS8EncodedKeySpec(buf);
                KeyFactory kf;
                try {
                    kf = KeyFactory.getInstance("RSA");
                    RSAPrivateKey privKey = (RSAPrivateKey) kf.generatePrivate(kspec);
                    CertificateFactory certFactory = CertificateFactory.getInstance("X.509");
                    InputStream in = new ByteArrayInputStream(certificateBytes);
                    //Generate Certificate in X509 Format
                    X509Certificate cert = (X509Certificate) certFactory.generateCertificate(in);
                    RSAPublicKey publicKey = (RSAPublicKey) cert.getPublicKey();
                    in.close();
                    return privKey.getModulus() == publicKey.getModulus();
                } catch (NoSuchAlgorithmException ex) {
                    logger.log(Level.SEVERE, "Such algorithm is not found", ex);
                } catch (CertificateException ex) {
                    logger.log(Level.SEVERE, "certificate exception", ex);
                } catch (InvalidKeySpecException ex) {
                    Logger.getLogger(CertificateConversion.class.getName()).log(Level.SEVERE, null, ex);
                }
            } catch (IOException ex) {
                logger.log(Level.SEVERE, "Signature verification failed.. This could be because the file is in use", ex);
            }
            return false;
        }

    And the code isn't working either: it throws an InvalidKeySpecException.
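
    For comparison, OpenSSL's C API has a single call that answers exactly this question for RSA, DSA and EC keys alike, so there is no need to compare moduli by hand. A minimal sketch, assuming both the certificate and the key are PEM-encoded files (paths and the function name are placeholders):

        #include <stdio.h>
        #include <openssl/pem.h>
        #include <openssl/x509.h>

        /* Returns 1 if the private key matches the certificate's public key,
         * 0 if it does not, -1 on a read error. */
        int key_matches_cert(const char *cert_path, const char *key_path)
        {
            FILE *cf = fopen(cert_path, "r");
            FILE *kf = fopen(key_path, "r");
            if (!cf || !kf) {
                if (cf) fclose(cf);
                if (kf) fclose(kf);
                return -1;
            }

            X509     *cert = PEM_read_X509(cf, NULL, NULL, NULL);
            EVP_PKEY *pkey = PEM_read_PrivateKey(kf, NULL, NULL, NULL);
            fclose(cf);
            fclose(kf);
            if (!cert || !pkey) {
                X509_free(cert);
                EVP_PKEY_free(pkey);
                return -1;
            }

            int match = X509_check_private_key(cert, pkey);  /* 1 = match, 0 = mismatch */

            EVP_PKEY_free(pkey);
            X509_free(cert);
            return match;
        }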

    Read the article

  • Graphics card artifacting

    - by White Phoenix
    This is my current build: EVGA X58 (first generation) motherboard Intel i7 965 @ stock clocks 3x 2GB DDR3-1600 Corsair RAM at stock timings and voltages Corsair AX750 80 Plus Gold PSU 1 Optical Drive 1 Seagate 7200.10 500 GB drive 2x Western Digital Caviar Black 1 TB drives OCZ Vertex 1 60 GB EVGA GTX 460 Antec 1200 case HT-Omega Striker 7.1 Sound Card Windows 7 32-bit Professional (PAE Enabled) My graphics card started artifacting while I was playing a game. It artifacted, the display blinked, then I got an NVIDIA driver has crashed and recovered message. Kept going, more artifacts, another crash, but this time my display blanked out and I couldn't do anything. Restarted my computer - artifacting is in the BIOS - got to Windows 7 but it BSOD'd before I could even log in. I restarted the computer again - artifacts cleared themselves out and I managed to get to Win7, but it soon started blinking in and out and artifacting again. Checked the card temps and they're well within range. (50 idle, 70 full load) The ambient temperature here is about 80-85F with high humidity. Tried Safe Mode and it still froze up/BSOD'd. Already tried the following to fix this problem: - Reseat the graphics card and swapped in a different slot. - Removed cover on card and sprayed with compressed air to clean it out. - Swapped around memory and/or went with only using one stick at a time. - Underclocked card I called EVGA Tech Support and they said that the voltage on the 12V rail of my Corsair AX750 PSU was on the high end of the "acceptable" range (12.4V, highest within acceptable range is 12.6V - optimal is obviously 12V). They gave me an RMA number anyway, but I want to get a second opinion from you all before I send this thing off, as shipping from where I live to EVGA is kind of pricey. This PSU is only 6 months old. So that I don't have to play RMA tag, which case would be most likely? I'm strapped for cash at the moment so I want to reduce the amount of RMAs I have to do since shipping is expensive here. Is there any surefire way to test to see if it's really the graphics card or the PSU? I tried unplugging any devices that were connected to the 12V rail (except for my SSD and graphics card) as I do have 3 mechanical hard drives, but the voltage for that rail didn't drop (it remained at 12.4V). I'm fairly sure it isn't the drivers since I'm getting the artifacting at the BIOS too. Right now I'm back in Windows 7 but I don't know for how long until it messes up again. Any ideas?

    Read the article

  • Supporting multiple instances of a plugin DLL with global data

    - by Bruno De Fraine
    Context: I converted a legacy standalone engine into a plugin component for a composition tool. Technically, this means that I compiled the engine code base to a C DLL which I invoke from a .NET wrapper using P/Invoke; the wrapper implements an interface defined by the composition tool. This works quite well, but now I receive the request to load multiple instances of the engine, for different projects. Since the engine keeps the project data in a set of global variables, and since the DLL with the engine code base is loaded only once, loading multiple projects means that the project data is overwritten.

    I can see a number of solutions, but they all have some disadvantages.

    You can create multiple DLLs with the same code, which are seen as different DLLs by Windows, so their code is not shared. Probably this already works if you have multiple copies of the engine DLL with different names. However, the engine is invoked from the wrapper using DllImport attributes and I think the name of the engine DLL needs to be known when compiling the wrapper. Obviously, if I have to compile different versions of the wrapper for each project, this is quite cumbersome.

    The engine could run as a separate process. This means that the wrapper would launch a separate process for the engine when it loads a project, and it would use some form of IPC to communicate with this process. While this is a relatively clean solution, it requires some effort to get working, and I don't know which IPC technology would be best to set up this kind of construction. There may also be a significant overhead of the communication: the engine needs to frequently exchange arrays of floating-point numbers.

    The engine could be adapted to support multiple projects. This means that the global variables should be put into a project structure, and every reference to the globals should be converted to a corresponding reference that is relative to a particular project. There are about 20-30 global variables, but as you can imagine, these global variables are referenced from all over the code base, so this conversion would need to be done in some automatic manner. A related problem is that you should be able to reference the "current" project structure in all places, but passing this along as an extra argument in each and every function signature is also cumbersome. Does there exist a technique (in C) to consider the current call stack and find the nearest enclosing instance of a relevant data value there?

    Can the stackoverflow community give some advice on these (or other) solutions?
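
    A minimal sketch of what the third option could look like, assuming the exported entry points can be changed to take an explicit handle; all names here are invented for illustration, and the thread-local pointer is only a crutch for internal code that is too costly to convert to an explicit parameter:

        #include <stdlib.h>

        /* The former globals move into a per-project context struct. */
        typedef struct engine_project {
            int    frame_count;     /* formerly a global counter        */
            float *scratch;         /* formerly a global working buffer */
        } engine_project;

        #if defined(_MSC_VER)
        static __declspec(thread) engine_project *g_current;   /* TLS fallback */
        #else
        static __thread engine_project *g_current;
        #endif

        engine_project *engine_open_project(void)
        {
            engine_project *p = calloc(1, sizeof *p);
            if (p) p->scratch = calloc(4096, sizeof *p->scratch);
            return p;
        }

        void engine_close_project(engine_project *p)
        {
            if (!p) return;
            free(p->scratch);
            free(p);
        }

        /* Exported entry point: the .NET wrapper passes the handle on every call.
         * Unconverted internals can still reach the project through g_current. */
        int engine_run_step(engine_project *p, const float *in, float *out, int n)
        {
            g_current = p;
            for (int i = 0; i < n; ++i)
                out[i] = in[i] * 0.5f;      /* stand-in for the real engine work */
            p->frame_count++;
            return p->frame_count;
        }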

    Read the article

  • Alienware m15x (older model) BSOD investigation

    - by Crishu
    A frined of mine asked me to help him with an Alienware m15x laptop that had a little service history. It was bought in june 2008, serviced in january 2009 for a random fps drop problem, Alienware returned it saying nothing was wrong. The laptop still had hiccups, but after juggling a few drivers and settings, the fps drops weren't as noticeable. Eventually it died in Sept. 2009. It would not boot up locking itself on a white/gray screen. (i think it was overheating .. clocking in 100 degrees Celsius). So back to Alienware it went. They replaced the GPU and all was fine. Up until these blue screens started showing up. One other thing that was updated was the HDD and a Windows 7 reinstall, in August. From then on it seems to have started its BSOD. Could this be the culprit? Why? 0_o The original Windows was Vista but it was upgraded with a digital download/purchase of Windows 7 Home Premium and activated after installing windows. No errors on the old HDD, just on the latest installation. LE:Due note that now the old HDD is used to see if issues re-occur. So please, I am in need of someone who can interpret these windows dump files: Minidump I may have come to some conflicting conclusions. So if someone can clarify each dump/date and the probable cause/error it had; and a final conclusion or solution, we would be very grateful. Also please consult report for other system info I omitted: same link,code: XRWIVLWG If I missed something or if you have any other questions I'll be happy to answer them. Thank you. Good day. Processor: Intel(R) Core(TM)2 Duo CPU T9300 @ 2.50GHz Network Adapter Properties: Broadcom NetLink (TM) Gigabit Ethernet Intel(R) Wireless WiFi Link 4965AGN Video Adapter Properties: Driver Description NVIDIA GeForce 8800M GTX Driver Date 19.08.2009 Driver Version 8.16.11.8681 Driver Provider NVIDIA INF File oem19.inf Hardware ID PCI\VEN_10DE&DEV_060C&SUBSYS_0770152D&REV_A2 Location Information @system32\DRIVERS\pci.sys,#65536;PCI bus %1, device %2, function %3;(1,0,0) PCI Device NVIDIA GeForce 8800M GTX [NoDB] BIOS String Version 62.92.34.0.8 Installed Drivers nvd3dum (8.16.11.8681), nvwgf2um, nvwgf2um Hard Dik Drive: Model ID ST9120823ASG (**older one 120gb**) Model ID WD32000BEKT (new 320gb with fresh OS)

    Read the article

  • asp:Button is not calling server-side function

    - by Richard Neil Ilagan
    Hi guys, I know that there has been much discussion here about this topic, but none of the threads I came across helped me solve this problem. I'm hoping that mine is somewhat unique, and may actually merit a different solution.

    I'm instantiating an asp:Button inside a data-bound asp:GridView through template fields. Some of the buttons are supposed to call a server-side function, but for some weird reason, it doesn't happen. All the buttons do when you click them is fire a postback to the current page, doing nothing, effectively just reloading the page. Below is a fragment of the code:

        <asp:GridView ID="gv" runat="server" AutoGenerateColumns="false" CssClass="l2 submissions" ShowHeader="false">
          <Columns>
            <asp:TemplateField>
              <ItemTemplate><asp:Panel ID="swatchpanel" CssClass='<%# Bind("status") %>' runat="server"></asp:Panel></ItemTemplate>
              <ItemStyle Width="50px" CssClass="sw" />
            </asp:TemplateField>
            <asp:BoundField DataField="description" ReadOnly="true">
            </asp:BoundField>
            <asp:BoundField DataField="owner" ReadOnly="true">
              <ItemStyle Font-Italic="true" />
            </asp:BoundField>
            <asp:BoundField DataField="last-modified" ReadOnly="true">
              <ItemStyle Width="100px" />
            </asp:BoundField>
            <asp:TemplateField>
              <ItemTemplate>
                <asp:Button ID="viewBtn" cssclass='<%# Bind("sid") %>' runat="server" Text="View" OnClick="viewBtnClick" />
              </ItemTemplate>
            </asp:TemplateField>
          </Columns>
        </asp:GridView>

    The viewBtn above should call the viewBtnClick() function on the server side. I do have that function defined, along with a proper signature (object, EventArgs). One thing that may be of note is that this code is actually inside an ASCX, which is loaded in another ASCX, finally loaded into an ASPX.

    Any help or insight into the matter will be SO appreciated. Thanks! (oh, and please don't mind my trashy HTML/CSS semantics - this is still in a very, very early stage :p)

    Read the article

  • using a Singleton to pass credentials in a multi-tenant application a code smell?

    - by Hans Gruber
    Currently working on a multi-tenant application that employs a Shared DB/Shared Schema approach. IOW, we enforce tenant data segregation by defining a TenantID column on all tables. By convention, all SQL reads/writes must include a Where TenantID = '?' clause. Not an ideal solution, but hindsight is 20/20.

    Anyway, since virtually every page/workflow in our app must display tenant-specific data, I made the (poor) decision at the project's outset to employ a Singleton to encapsulate the current user credentials (i.e. TenantID and UserID). My thinking at the time was that I didn't want to add a TenantID parameter to each and every method signature in my Data layer. Here's what the basic pseudo-code looks like:

        public class UserIdentity
        {
            public UserIdentity(int tenantID, int userID)
            {
                TenantID = tenantID;
                UserID = userID;
            }

            public int TenantID { get; private set; }
            public int UserID { get; private set; }
        }

        public class AuthenticationModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.AuthenticateRequest += new EventHandler(context_AuthenticateRequest);
            }

            private void context_AuthenticateRequest(object sender, EventArgs e)
            {
                var userIdentity = _authenticationService.AuthenticateUser(sender);

                if (userIdentity == null)
                {
                    //authentication failed, so redirect to login page, etc
                }
                else
                {
                    //put the userIdentity into the HttpContext object so that
                    //its only valid for the lifetime of a single request
                    HttpContext.Current.Items["UserIdentity"] = userIdentity;
                }
            }
        }

        public static class CurrentUser
        {
            public static UserIdentity Instance
            {
                get { return HttpContext.Current.Items["UserIdentity"]; }
            }
        }

        public class WidgetRepository : IWidgetRepository
        {
            public IEnumerable<Widget> ListWidgets()
            {
                var tenantId = CurrentUser.Instance.TenantID;

                //call sproc with tenantId parameter
            }
        }

    As you can see, there are several code smells here. This is a singleton, so it's already not unit-test friendly. On top of that you have a very tight coupling between CurrentUser and the HttpContext object. By extension, this also means that I have a reference to System.Web in my Data layer (shudder).

    I want to pay down some technical debt this sprint by getting rid of this singleton for the reasons mentioned above. I have a few thoughts on what a better implementation might be, but if anyone has any guidance or lessons learned they could share, I would be much obliged.

    Read the article

  • New harddrives failing within weeks.

    - by Jason Kealey
    I've experienced 8 hard disk failures in 3 months and have tried many things to solve the issue permanently but I have failed. I would like to know if you have any advice for me. System was running Win XP on an Asus P5W-DH Deluxe. I have setup a RAID-1 array. I started out with 2 x 500 GB 7200RPM Western Digital drives. One died. I took it out to RMA it. On the same day, the router was fried. Assumed a power surge occurred; connected an older UPS to protect the system. Once I got my hands on an identical disk, I installed it. The RAID array was rebuilt. A few days later, the other one died. Assumed the rebuild caused it to fail. Took it out for RMA. Before the other one arrived, the remaining one died. I then discovered I could re-enable them using the Intel Matrix Storage Manager. I re-enabled both and the system seemed fine for a week, until both died again. I got two new 1.5 TB 7200RPM Seagate drives and re-installed Windows 7. Also replaced the UPS and power supply. They both died again. The voltage on the plug is stable between 120 and 122V as per the UPS. None of the other devices have had any problems (monitors, etc.). At this point, I see two options: a) electrical issue in the house that was, for some reason, not blocked by the UPS. b) something else inside the system causing surges? motherboard? onboard raid controller? Failures happen fairly quickly, between 2 and 14 days after I fix the previous issue. I just gotten a new computer (Core i7) to replace it. If it is stable, I can determine that b) was the problem. If it fries its hard drive again, I can determine that it is an electrical issue in the house. Do you have any other thoughts? Any tools I can run on the drives that failed to get more information about the original SMART event history?

    Read the article
