Search Results

Search found 8935 results on 358 pages for 'mad vs'.


  • WHY does JSLint complain: "someFunction() was used before it was defined"?

    - by 7hi4g0
    Searching for the JSLint error "was used before it was defined" I've found these: JSLint: Using a function before it's defined error Function was used before it was defined - JSLint JSLint: was used before it was defined jsLint error: “somefunction() was used before it was defined” jslint - Should we tolerate misordered definitions? Problem None of those explains WHY the error is shown. Elaboration According to the ECMA-262 Specification, functions are evaluated before execution starts, hence all functions declared using the function keyword are available to all the code independent of the place they were declared (assuming they are accessible in that scope). This is otherwise known as hoisting. Douglas Crockford seems to think it is better to declare every function before the code that uses it regardless of the hoisting effect. According to StackOverflowNewbie in his question, this raises some code organization problems. Not to mention some people, like me, prefer to declare their functions underneath the main/init code. On those questions there are some ways to avoid or fix the error, such as using function expressions vs function declarations. But none of them showed me the reason for the error. Not even Crockford's site. Question(s) Why is it an error to call a function before its declaration, even if it was declared using the function keyword? Is it better to use function expressions instead of function declarations in the JSLint context? If one is preferred, why? Note Not looking for answers like: Crockford is a tyrant It's just Crockford's opinion Thank you :*

    Read the article

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is, how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places. CREATE TABLE #t ( placeId int, ts datetime, temp int, PRIMARY KEY (ts, placeId) ) -- insert some sample data SET NOCOUNT ON DECLARE @n int, @ts datetime SELECT @n = 1000, @ts = '2000-01-01' WHILE (@n>0) BEGIN INSERT INTO #t VALUES (@n % 10, @ts, @n % 37) IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts) SET @n = @n - 1 END Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty). SELECT * FROM ( SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 1 ORDER BY ts DESC ) t1 UNION ALL SELECT * FROM ( SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 2 ORDER BY ts DESC ) t2 UNION ALL SELECT * FROM ( SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 3 ORDER BY ts DESC ) t3 The following looks better but works much less efficiently (30% vs 70% according to the optimizer). SELECT placeId, ts, temp FROM ( SELECT placeId, ts, temp, ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum FROM #t WHERE placeId IN (1, 2, 3) ) t WHERE rownum = 1 The problem is, in the latter query's execution plan a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows. For the former query, one row is fetched three times. Is there a way to perform the query efficiently without lots of unions?

    Read the article

  • Does running IIS7 in classic mode affect MVC output caching?

    - by Bob
    I have a need to run an application in classic mode for backwards compatibility with a specific application, and am trying to understand what kind of impact that will have on the performance of an MVC application that is running on the site. If we put a few static file maps (for .js, .css, .png, etc.) above the ASP.NET wildcard map to reduce the amount of processing by the ASP.NET handler, will we be approaching integrated mode in terms of performance? The thing I'm primarily concerned with is any effect this might have on output caching. I understand that integrated mode might (?) allow the output cache to handle non-ASP.NET content, but that isn't really a concern. We're more interested in ensuring that the MVC application has full use of the output cache. Empirically I've found that the two configurations operate on par when things go well, but if the page references resources that are not available, the integrated mode tends to fail much more quickly than the classic mode (e.g. 500 ms vs 10 seconds), reducing 'hang time' on the page load. Thanks for any feedback.

    Read the article

  • Crash using WscRegisterForChanges.

    - by user335126
    I'm trying to use the WscRegisterForChanges function from C++ on Windows 7. Documentation located here: http://msdn.microsoft.com/en-us/library/bb432507(v=VS.85).aspx My problem is that even though the callback properly executes, the code crashes when it gets to the end of the callback's execution. Here's the code in question. It's very simple, so I'm not sure why it's crashing: #include <windows.h> #include <wscapi.h> #include <stdio.h> void SecurityCenterChangeOccurred(void *param) { printf("Change occurred!\n"); } int main() { HRESULT result = S_OK; HANDLE callbackRegistration = NULL; result = WscRegisterForChanges( NULL, &callbackRegistration, (LPTHREAD_START_ROUTINE)SecurityCenterChangeOccurred, NULL); while(1) { Sleep(100); } return 0; } My call stack looks like this when the crash occurs: 00faf6e8() ntdll.dll!_TppWorkerThread@4() + 0x1293 bytes kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes If I add ExitThread(0); to the end of SecurityCenterChangeOccurred, I get an error and the following trace (so I don't think I should be using ExitThread): Unhandled exception at 0x7799852b (ntdll.dll) in WscRegisterForChangesCrash.exe: 0xC000071C: An invalid thread, handle %p, is specified for this operation. Possibly, a threadpool worker thread was specified. ntdll.dll!_TpCheckTerminateWorker@4() + 0x3ca2f bytes ntdll.dll!_RtlExitUserThread@4() + 0x30 bytes WscRegisterForChangesCrash.exe!SecurityCenterChangeOccurred(void * param=0x00000000) Line 8 + 0xa bytes C++ wscapi.dll!WorkItemWrapper() + 0x19 bytes ntdll.dll!_RtlpTpWorkCallback@8() + 0xdf bytes ntdll.dll!_TppWorkerThread@4() + 0x1293 bytes kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes Does anyone have any ideas why this might be happening? To trigger the crash, run the program and turn the firewall on or off.
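    A hunch worth checking (hedged: inferred from the call stack above, not verified against this exact project): the (LPTHREAD_START_ROUTINE) cast is suspect. The callback is invoked by a thread-pool worker through that typedef, so a void-returning __cdecl function corrupts the stack when it returns on x86, and ExitThread must never be called on a thread-pool thread, which matches the second error. A minimal sketch with a signature-matching callback and no cast:

      #include <windows.h>
      #include <wscapi.h>   // WscRegisterForChanges / WscUnRegisterChanges; link with wscapi.lib
      #include <stdio.h>

      // Exactly matches LPTHREAD_START_ROUTINE: DWORD return, WINAPI (__stdcall) convention.
      // Just return from it; never call ExitThread on a thread-pool worker.
      static DWORD WINAPI SecurityCenterChangeOccurred(LPVOID /*param*/)
      {
          printf("Change occurred!\n");
          return 0;
      }

      int main()
      {
          HANDLE callbackRegistration = NULL;
          HRESULT hr = WscRegisterForChanges(NULL, &callbackRegistration,
                                             SecurityCenterChangeOccurred, NULL);
          if (FAILED(hr))
              return 1;

          Sleep(60 * 1000);                           // keep the process alive for the demo
          WscUnRegisterChanges(callbackRegistration); // tidy up the registration
          return 0;
      }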

    Read the article

  • Reading the .vcproj file with C#

    - by Dulantha Fernando
    We create the vcproj file with the MakeFileProj keyword so we can use our own build in VS. My question is, using C#, how do you read the "C++" from the following vcproj file: <?xml version="1.0" encoding="Windows-1252"?> <VisualStudioProject ProjectType="Visual C++" Version="8.00" Name="TestCSProj" ProjectGUID="{840501C9-6AFE-8CD6-1146-84208624C0B0}" RootNamespace="TestCSProj" Keyword="MakeFileProj" > <Platforms> <Platform Name="x64" /> <Platform Name="C++" ===> I need to read "C++" /> </Platforms> I used XmlNode and got up to the second Platform: String path = "C:\\blah\\TestCSProj\\TestCSProj\\TestCSProj.vcproj"; FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite); XmlDocument xmldoc = new XmlDocument(); xmldoc.Load(fs); XmlNodeList oldFiles = xmldoc.GetElementsByTagName("Platform"); XmlAttribute name = oldFiles[1].Attributes[0]; Console.WriteLine(name.Name); This will print Name, but I need "C++". How do I read that? Thank you very much in advance

    Read the article

  • C++: Copy constructor: Use Getters or access member vars directly?

    - by cbrulak
    I have a simple container class: class Container { public: Container() {} Container(const Container& cont) //option 1 { SetMyString(cont.GetMyString()); } //OR Container(const Container& cont) //option 2 { m_str1 = cont.m_str1; } string GetMyString() { return m_str1; } void SetMyString(string str) { m_str1 = str; } private: string m_str1; }; So, would you recommend this method or accessing the member variables directly? In the example, all code is inline, but in our real code there is no inline code. Update (29 Sept 09): Some of these answers are well written, however they seem to be missing the point of this question: this is a simple, contrived example to discuss using getters/setters vs variables. Initializer lists or private validator functions are not really part of this question. I'm wondering if either design will make the code easier to maintain and expand. Some people are focusing on the string in this example, however it is just an example; imagine it is a different object instead. I'm not concerned about performance. We're not programming on the PDP-11
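    For comparison, a third option the question doesn't list (a sketch only, with the pseudo-C++ above tightened into compilable C++): initialize the member in the constructor's initializer list, which avoids both the getter/setter round trip of option 1 and the default-construct-then-assign of option 2. Going through the setter only really pays off when it enforces an invariant that copies must also respect.

      #include <string>
      using std::string;

      class Container {
      public:
          Container() {}

          // Option 3: direct initialization in the initializer list --
          // no setter call, no assignment in the constructor body.
          Container(const Container& other) : m_str1(other.m_str1) {}

          const string& GetMyString() const { return m_str1; }
          void SetMyString(const string& str) { m_str1 = str; }

      private:
          string m_str1;
      };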

    Read the article

  • Is it okay for multiple objects to retain the same object in Objective-C/Cocoa?

    - by Andrew Arrow
    Say I have a tableview class that lists 100 Foo objects. It has: @property (nonatomic, retain) NSMutableArray* fooList; and I fill it up with Foos like: self.fooList = [NSMutableArray array]; while (something) { Foo* foo = [[Foo alloc] init]; [fooList addObject:foo]; [foo release]; } First question: because the NSMutableArray is marked as retain, does that mean all the objects inside it are retained too? Am I correctly adding the foo and releasing the local copy after it's been added to the array? Or am I missing a retain call? Then if the user selects one specific row in the table and I want to display a detail Foo view I call: FooView* localView = [[FooView alloc] initWithFoo:[self.fooList objectAtIndex:indexPath.row]]; [self.navigationController pushViewController:localView animated:YES]; [localView release]; Now the FooView class has: @property (nonatomic, retain) Foo* theFoo; so now BOTH the array AND the FooView are holding on to that Foo. But that seems okay, right? When the user hits the back button, dealloc will be called on FooView and [theFoo release] will be called. Then another back button is hit and dealloc is called on the tableview class and [fooList release] is called. You might argue that the FooView class should have: @property (nonatomic, assign) Foo* theFoo; vs. retain. But sometimes the FooView class is called with a Foo that's not also in an array. So I wanted to make sure it was okay to have two objects holding on to the same other object.

    Read the article

  • Why does this program require MSVCR80.dll and what's the best solution for this kinda problem?

    - by Runner
    #include <gtk/gtk.h> int main( int argc, char *argv[] ) { GtkWidget *window; gtk_init (&argc, &argv); window = gtk_window_new (GTK_WINDOW_TOPLEVEL); gtk_widget_show (window); gtk_main (); return 0; } I tried putting various versions of MSVCR80.dll under the same directory as the generated executable (built via cmake), but none matched. Is there a general solution for this kinda problem? UPDATE Some answers recommend installing the VS redist, but I'm not sure whether or not it will affect my installed Visual Studio 9; can someone confirm? Manifest file of the executable <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> </assembly> It seems the manifest file says it should use MSVCR90, so why is it always reporting a missing MSVCR80.dll? FOUND After spending several hours on it, I finally found it's caused by this setting in PATH: D:\MATLAB\R2007b\bin\win32 After removing it, all works fine. But why would that setting make my executable load msvcr80 instead of msvcr90???

    Read the article

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines and the solution failed to build on my machine. The build on my machine encountered two problems: During a simple build, creation of the biggest static library (about 522Mb in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879" A full solution rebuild creates this library, however when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe, which consumes about 80Mb of physical memory and 250MB of virtual and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On PCs of my colleagues where a successful build could be performed, there is only one link.exe process which uses all the memory required for linking (about 500Mb physical). There is plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4Gb of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86 and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except for maybe "out of heap space", however I am not sure how I should fight this. The only idea that I have left is reinstalling the OS, however I am not sure that it would help and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

    Read the article

  • Parameter meaning of CBasePin::GetMediaType(int iPosition, ...) method

    - by user325320
    Thanks to everyone who views my question. http://msdn.microsoft.com/en-us/library/windows/desktop/dd368709(v=vs.85).aspx The documentation is not very clear about the iPosition parameter of virtual HRESULT GetMediaType( int iPosition, CMediaType *pMediaType ); It says "Zero-based index value.", but what kind of index is it? The index of the samples? I have a source filter sending the H.264 NALU flows (MEDIASUBTYPE_AVC1) and it works very well, except that the SPS/PPS may be changed after the video has played for a while. The SPS and PPS are appended to the MPEG2VIDEOINFO structure, which is passed in the CMediaType::SetFormat method when GetMediaType is called, and there is another version of GetMediaType which accepts the iPosition parameter. It seems I can use this method to update the SPS / PPS. My question is: What does the iPosition param mean, and how does the decoder filter know which SPS/PPS are assigned for each NALU sample? HRESULT GetMediaType(int iPosition, CMediaType *pMediaType) { ATLTRACE( "\nGetMediaType( iPosition = %d ) ", iPosition); CheckPointer(pMediaType,E_POINTER); CAutoLock lock(m_pFilter->pStateLock()); if (iPosition < 0) { return E_INVALIDARG; } if (iPosition == 0) { pMediaType->InitMediaType(); pMediaType->SetType(&MEDIATYPE_Video); pMediaType->SetFormatType(&FORMAT_MPEG2Video); pMediaType->SetSubtype(&MEDIASUBTYPE_AVC1); pMediaType->SetVariableSize(); } int nCurrentSampleID; DWORD dwSize = m_pFlvFile->GetVideoFormatBufferSize(nCurrentSampleID); LPBYTE pBuffer = pMediaType->ReallocFormatBuffer(dwSize); memcpy( pBuffer, m_pFlvFile->GetVideoFormatBuffer(nCurrentSampleID), dwSize); pMediaType->SetFormat(pBuffer, dwSize); return S_OK; }
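    For context, a sketch under assumptions (DirectShow base classes from streams.h; CMyOutputPin is a placeholder class name): iPosition indexes the pin's list of preferred media types, which the base class enumerates while the pins are connecting — it has nothing to do with samples — so a pin that offers a single type returns VFW_S_NO_MORE_ITEMS for every position past zero:

      #include <streams.h>   // DirectShow base classes

      // iPosition enumerates this pin's preferred media types during connection,
      // not samples. Positions past the last type must return VFW_S_NO_MORE_ITEMS.
      HRESULT CMyOutputPin::GetMediaType(int iPosition, CMediaType *pMediaType)
      {
          CheckPointer(pMediaType, E_POINTER);
          CAutoLock lock(m_pFilter->pStateLock());

          if (iPosition < 0)
              return E_INVALIDARG;
          if (iPosition > 0)
              return VFW_S_NO_MORE_ITEMS;   // only one preferred type on this pin

          pMediaType->InitMediaType();
          pMediaType->SetType(&MEDIATYPE_Video);
          pMediaType->SetSubtype(&MEDIASUBTYPE_AVC1);
          pMediaType->SetFormatType(&FORMAT_MPEG2Video);
          pMediaType->SetVariableSize();
          // ...copy the MPEG2VIDEOINFO block (including SPS/PPS) into the format buffer...
          return S_OK;
      }

    For a mid-stream SPS/PPS change, the usual route is a dynamic format change — offer the new type downstream via QueryAccept and attach it to the next sample with IMediaSample::SetMediaType — since GetMediaType is only consulted while the pins are (re)connecting.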

    Read the article

  • Why does VS2005 skip execution of lines when debugging managed C++ without optimizations?

    - by Sakin
    I ran into a rather odd behavior that I don't even know how to start describing. I wrote a piece of managed C++ code that makes calls to native methods. A (very) simplified version of the code would look like this (I know it looks like a full native function, just assume there is managed stuff being done all over the place): int somefunction(ptrHolder x) { // the accessptr method returns a native pointer if (x.accessptr() != nullptr) // I tried this with nullptr, NULL, 0) { try { x->doSomeNativeVeryImportantStuff(); // or whatever, doesn't matter } catch (SomeCustomExceptionClass &) { return 0; } } SomeOtherNativeClass::doStaticMagic(); return 1; } I compiled this code without optimizations using the /clr flag (VS.NET 2005, SP2) and when running it in the debugger I get to the if statement, since the pointer is actually null, I don't enter the if, but surprisingly, the cursor jumps directly to the return 1 statement, ignoring the doStaticMagic() method completely!!! When looking at the assembly code, I see that it really jumps directly to that line. If I force the debugger to enter the if block, I also jump to the return 1 statement after I press F10. Any ideas why this is happening? Thanks, Ariel

    Read the article

  • File/Property rename problem in Visual Studio and Explorer

    - by user211377
    I am running Windows 7. In Visual Studio, if I try to rename a file by right-click/rename, it behaves as normal for a couple of seconds, then switches out of edit mode. A similar problem occurs when I try to change a property, for example the name of a control. When I click in the property value, I can start editing, but then it assumes the edit is complete, and if I continue typing it overwrites the text. It does this every couple of seconds, so, for example, if I want to name a control mnuFile, I might get mn, then uFi, then le. So, the control ends up getting called whatever I typed in the last 2-3 characters. I have the same problem with file rename in Explorer. Looks to me as though some timeout is kicking in and terminating the edit. Well, I was going to try a 'Repair install', but that's not an option in Windows 7! So, I went through the re-install, up to the point where I thought it was going to trash my install, and then cancelled it! By some miracle, that has fixed the problem! Thanks for the advice about ShellExView, I'll try that next time it happens. Thanks for the answers guys! In my view it is more a Visual Studio issue, since it affects both file renames and properties in VS. In Explorer it only affects file rename, which is (just slightly) less annoying!

    Read the article

  • ctypes DLL with optional dependencies

    - by pisswillis
    Disclaimer: I'm new to Windows programming so some of my assumptions may be wrong. Please correct me if so. I am developing a Python wrapper for a C API using ctypes. The API ships with both 64 and 32 bit DLLs/LIBs. I can successfully load the DLL using ctypes.WinDLL('TheLibName') and call functions etc etc. However some functions were not doing what they should. Upon further investigation it appears that the 32-bit DLL is being used, which is what is causing the unexpected behaviour. I have tried using ctypes.WinDLL('TheLibName64') but the module is not found. I have tried registering the DLL with regsvr32, but it reports there is no entry point (it also reports no entry point when I try and register TheLibName, which is found by WinDLL()). The DLL came with a sample project in Visual Studio (I have 0 experience with VS so again please correct me here) which builds both 32 and 64 bit versions of the sample project. In the .vcproj file the configurations for the 64 bit version include: AdditionalDependencies="TheLibName64.lib" in the VCLinkerTool section. In windows/system32 there are both TheLibName.dll/.lib and TheLibName64.dll/.lib. So it seems to me that my problem is now to make the Python ctypes DLL loader load these optional dependencies when the DLL is loaded. However I can't find any information on this (perhaps because, as a doze noob, I do not know the correct terminology) in the ctypes documentation. Is there a way to do this in ctypes? Am I going about this in completely the wrong way? Any help or general information about optional DLL dependencies and how they are loaded in Windows would be much appreciated. Thanks

    Read the article

  • C++ struct, public data members and inheritance

    - by Marius
    Is it ok to have public data members in a C++ class/struct in certain particular situations? How would that go along with inheritance? I've read opinions on the matter, some stated already here http://stackoverflow.com/questions/952907/practices-on-when-to-implement-accessors-on-private-member-variables-rather-than http://stackoverflow.com/questions/670958/accessors-vs-public-members or in books/articles (Stroustrup, Meyers), but I'm still a little bit in the dark. I have some configuration blocks that I read from a file (integers, bools, floats) and I need to place them into a structure for later use. I don't want to expose these externally, just use them inside another class (I actually do want to pass these config parameters to another class, but don't want to expose them through a public API). The fact is that I have many such config parameters (15 or so) and writing getters and setters seems like unnecessary overhead. Also I have more than one configuration block and these blocks share some of the parameters. Making a struct with all the data members public and then subclassing does not feel right. What's the best way to tackle that situation? Does making a big struct to cover all parameters provide an acceptable compromise (I would have to leave some of these set to their default values for blocks that do not use them)?
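    One common middle ground for this kind of config-block situation (a sketch under assumed names, not a prescription): keep each block a plain struct with public members and sensible defaults, share the common parameters by composition rather than subclassing, and pass the struct by const reference to the consuming class so the raw struct never becomes part of the public API.

      #include <string>

      // Plain config blocks: public data, no getters/setters.
      // All field names here are hypothetical, for illustration only.
      struct CommonConfig {
          int  retryCount;
          bool verboseLogging;
          CommonConfig() : retryCount(3), verboseLogging(false) {}
      };

      struct NetworkConfig {
          CommonConfig common;        // shared parameters by composition, not inheritance
          std::string  host;
          int          port;
          float        timeoutSeconds;
          NetworkConfig() : host("localhost"), port(8080), timeoutSeconds(5.0f) {}
      };

      // The consumer takes the block by const reference and keeps a private copy,
      // so callers cannot reach into the configuration after construction.
      class Downloader {
      public:
          explicit Downloader(const NetworkConfig& cfg) : m_cfg(cfg) {}
      private:
          NetworkConfig m_cfg;
      };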

    Read the article

  • Consuming an HTTPS Web Service in .Net 3.5 Web Project

    - by Chris M
    I'm trying to consume a web service that ONLY runs on HTTPS, but using the "add service" method in VS or using the WSDL to generate a code file leaves me with a web service that states it's http... <wsdl:service name="OGServ"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/">XML Web Services element of OGServ Gateway</wsdl:documentation> <wsdl:port name="OGServSoap" binding="tns:OGServSoap"> <soap:address location="http://ogserv.domain.co.uk/ogwsrv/og.asmx" /> </wsdl:port> <wsdl:port name="OGServSoap12" binding="tns:OGServSoap12"> <soap12:address location="http://ogserv.domain.co.uk/ogwsrv/og.asmx" /> </wsdl:port> </wsdl:service> Would this be the reason that even when I change the app.config (generated by the add-service) endpoint address to https it says it was expecting HTTP? The error: EC.Tests.OGGatewayLayerTest (TestFixtureSetUp): System.ArgumentException : The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via

    Read the article

  • What's the state of PHP unit testing frameworks in 2010?

    - by Pekka
    As far as I can see, PHPUnit is the only serious product in the field at the moment. It is widely used, is integrated into Continuous Integration suites like phpUnderControl, and well regarded. The thing is, I don't really like working with PHPUnit. I find it hard to set up (PEAR is the only officially supported installation method, and I hate PEAR), sometimes complicated to work with and, correct me if I'm wrong, lacking executability from a web page context (i.e. it can only be run from the CLI, and being able to run tests from a web page would really be nice when developing a web app). The only competition I can see is SimpleTest, which looks very nice but hasn't seen a new release for almost two years, which tends to rule it out for me - Unit Testing is quite a static field, true, but as I will be deploying those tests alongside web applications, I would like to see active development on the project, at least for security updates and such. There is a SO question that pretty much confirms what I'm saying: Simple test vs PHPunit Seeing that that is almost two years old as well, though, I think it's time to ask again: Does anybody know any other serious feature-complete unit testing frameworks? Am I wrong in my criticism of PHPUnit? Is there still development going on for SimpleTest?

    Read the article

  • File IO with Streams - Best Memory Buffer Size

    - by AJ
    I am writing a small IO library to assist with a larger (hobby) project. A part of this library performs various functions on a file, which is read / written via the FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...). My main question is: What is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2k, which would cover a CD sector size and is a nice multiple of a 512 byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise with today's PCs, I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app. As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP / HTTP servers (over a local network / fastish DSL). What would be the best memory buffer size for those (again, a "best-case" tradeoff between perceived responsiveness vs. performance)?
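    A rough sketch of the pattern being described (written in C++ here rather than the .NET stream types, purely to illustrate the tradeoff): the buffer size sets both the I/O granularity and how often the progress event fires, so something in the tens of kilobytes usually balances throughput against UI responsiveness.

      #include <cstddef>
      #include <cstdio>
      #include <vector>

      // Copy src to dst in fixed-size chunks, firing a progress callback per chunk.
      // 80 KB is an arbitrary but typical sweet spot; tune for the target device.
      bool CopyWithProgress(const char* src, const char* dst,
                            void (*onProgress)(long long bytesSoFar),
                            std::size_t bufferSize = 80 * 1024)
      {
          std::FILE* in  = std::fopen(src, "rb");
          std::FILE* out = std::fopen(dst, "wb");
          if (!in || !out) {
              if (in)  std::fclose(in);
              if (out) std::fclose(out);
              return false;
          }

          std::vector<char> buffer(bufferSize);
          long long total = 0;
          std::size_t n;
          while ((n = std::fread(&buffer[0], 1, buffer.size(), in)) > 0) {
              std::fwrite(&buffer[0], 1, n, out);
              total += static_cast<long long>(n);
              onProgress(total);   // one UI update per chunk: bigger buffer = fewer updates
          }

          std::fclose(in);
          std::fclose(out);
          return true;
      }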

    Read the article

  • Preserving timestamps on Clojure .clj files when building shaded jar via Maven Shade Plugin

    - by Dereference
    We were using the maven-shade-plugin to package our jar artifact, which contained a few Clojure libs and some Java. We were using AOT compilation for our Clojure code. When we loaded the jar, it was having very slow load times. AOT compilation is supposed to help this quite a bit, but that wasn't what we were seeing. We noticed in java jar -verbose output that there were a lot of JVM__DEFINE_CLASS calls happening when Clojure classes were being loaded. This didn't make sense, since most of our Clojure code was AOT compiled to .class files. Turns out the maven-shade-plugin creates all new files, with new timestamps, in the final artifact. Clojure uses the timestamp information on a .clj file vs. a .class file to determine if the file needs to be recompiled. The maven-shade-plugin was causing the .clj file and its associated .class file to have the same timestamp, so Clojure always chose to dynamically recompile the source. The only workaround that we have been able to figure out, at this point, is to write a script that would re-open the shaded jar and bump the .clj file timestamps back to some time in the past, so that they wouldn't be equal to the timestamps of their associated .class files. Does anyone know of a better approach?

    Read the article

  • Is it possible to wrap an asynchronous event and its callback in a function that returns a boolean?

    - by Rob Flaherty
    I'm trying to write a simple test that creates an image element, checks the image attributes, and then returns true/false. The problem is that using the onload event makes the test asynchronous. On its own this isn't a problem (using a callback as I've done in the code below is easy), but what I can't figure out is how to encapsulate this into a single function that returns a boolean. I've tried various combinations of closures, recursion, and self-executing functions but have had no luck. So my question: am I being dense and overlooking something simple, or is this in fact not possible, because, no matter what, I'm still trying to wrap an asynchronous function in synchronous expectations? Here's the code: var supportsImage = function(callback) { var img = new Image(); img.onload = function() { //Check attributes and pass true or false to callback callback(true); }; img.src = 'data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs='; }; supportsImage(function(status){ console.log(status); }); To be clear, what I want is to be able to wrap this in something such that it can be used like: if (supportsImage) { //Do some crazy stuff } Thanks! (Btw, I know there are a ton of SO questions regarding confusion about synchronous vs. asynchronous. Apologies if this can be reduced to something previously answered.)

    Read the article

  • Short file names versus long file names in Windows

    - by normski
    I have some code which gets the short name from a file path, using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext" However, following conversion to short, then back to long, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT" The short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS2008). Does anybody have any idea why this might be happening? DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0); if(pathLengthNeeded != 0) { WCHAR* shortPath = new WCHAR[pathLengthNeeded]; DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded); if(newPathNameLength != 0) { UI_STRING unicodePath(shortPath); std::string asciiPath = StringFromUserString(unicodePath); pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(),NULL, 0); if(pathLengthNeeded != 0) {// convert it back to a long path if possible. For goodness sake can't we use Unicode throughout? char* longPath = new char[pathLengthNeeded]; DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded); if(newPathNameLength != 0) { std::string longPathString(longPath, newPathNameLength); asciiPath = longPathString; } delete [] longPath; } SetFullPathName(asciiPath); } delete [] shortPath; }
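    For what it's worth: 8.3 aliases like PROGRA~2 are assigned per directory in creation order, so which long name an alias maps to is machine-specific, and the ANSI detour through StringFromUserString / GetLongPathNameA adds a conversion step that is easy to rule out. A sketch that keeps the whole round trip in Unicode so the file system resolves the alias directly (assumes the same VS 2005 / Windows 7 setup; returns an empty string on failure):

      #include <windows.h>
      #include <string>
      #include <vector>

      // Round-trip a path through its 8.3 form and back, staying in Unicode the
      // whole way so no code-page conversion can disturb the result.
      std::wstring RoundTripPath(const std::wstring& original)
      {
          DWORD len = ::GetShortPathNameW(original.c_str(), NULL, 0);
          if (len == 0) return std::wstring();
          std::vector<wchar_t> shortBuf(len);
          if (::GetShortPathNameW(original.c_str(), &shortBuf[0], len) == 0)
              return std::wstring();

          len = ::GetLongPathNameW(&shortBuf[0], NULL, 0);
          if (len == 0) return std::wstring();
          std::vector<wchar_t> longBuf(len);
          if (::GetLongPathNameW(&shortBuf[0], &longBuf[0], len) == 0)
              return std::wstring();

          return std::wstring(&longBuf[0]);
      }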

    Read the article

  • Why is this unordered list formatting differently in IE7?

    - by Joel
    I'm better about getting things to look good in IE8, FF, and Safari, but IE7 still throws curve balls at me... Please check out this page and scroll down below the nav bar: http://rattletree.com/instruments.php It should become obvious when viewing in FF vs IE7. For some reason the formatting of the list is pushing the list items down on the page... any tips? <ul class="instrument"> <li class="imagebox"><img src="/images/stuff.jpg" width="247" height="228" alt="Matepe" /></li> <li class="textbox"><h3>Matepe</h3><p>This text should be to the right of the image but drops below the image in IE7</p></li> </ul> css: ul.instrument { text-align:left; display:inline-block; } ul.instrument li { list-style-type: none; display:inline-block; } li.imagebox { display:inline; margin:20px 0; padding:0px; vertical-align:top; } li.imagebox img{ border: solid black 1px; } li.textbox { display:inline; } li.textbox p{ margin:10px; width:340px; display:inline-block; }

    Read the article

  • Compile C++ in Visual Studio

    - by Kasun
    Hi All.. I use this method to compile a C++ file in VS. But even if I provide the correct file it returns false. Can anyone help me... This is a class called CL: class CL { private const string clexe = @"cl.exe"; private const string exe = "Test.exe", file = "test.cpp"; private string args; public CL(String[] args) { this.args = String.Join(" ", args); this.args += (args.Length > 0 ? " " : "") + "/Fe" + exe + " " + file; } public Boolean Compile(String content, ref string errors) { if (File.Exists(exe)) File.Delete(exe); if (File.Exists(file)) File.Delete(file); File.WriteAllText(file, content); Process proc = new Process(); proc.StartInfo.UseShellExecute = false; proc.StartInfo.RedirectStandardOutput = true; proc.StartInfo.RedirectStandardError = true; proc.StartInfo.FileName = clexe; proc.StartInfo.Arguments = this.args; proc.StartInfo.CreateNoWindow = true; proc.Start(); //errors += proc.StandardError.ReadToEnd(); errors += proc.StandardOutput.ReadToEnd(); proc.WaitForExit(); bool success = File.Exists(exe); return success; } } This is my button click event: private void button1_Click(object sender, EventArgs e) { string content = "#include <stdio.h>\nmain(){\nprintf(\"Hello world\");\n}\n"; string errors = ""; CL k = new CL(new string[] { }); if (k.Compile(content, ref errors)) Console.WriteLine("Success!"); else MessageBox.Show(errors, "Errors"); }

    Read the article

  • Boost program not working on Linux

    - by Martin Lauridsen
    Hi SOF, I have this program which uses Boost::Asio for sockets. I pretty much altered some code from the Boost examples. The program compiles and runs just like it should on Windows in VS. However, when I compile the program on Linux and run it, I get a segmentation fault. I posted the code here The command I use to compile it is this: c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib -lboost_system -lboost_thread -static -lpthread By commenting out code, I have found that I get the segmentation fault due to the following line: boost::asio::io_service io_service; Can anyone provide any assistance as to what may be the problem (and the solution)? Thanks! Edit: I tried changing the program to a minimal example, using no other libraries or headers, just boost/asio.hpp: #define DEBUG 0 #include <boost/asio.hpp> int main(int argc, char* argv[]) { boost::asio::io_service io_service; return 0; } I also removed the other library inclusions and linking from the compilation; however, this minimal example still gives me a segmentation fault.

    Read the article

  • Entity framework with Linq to Entities performance

    - by mare
    If I have a static method like this public static string GetTicClassificationTitle(string classI, string classII, string classIII) { using (TicDatabaseEntities ticdb = new TicDatabaseEntities()) { var result = from classes in ticdb.Classifications where classes.ClassI == classI where classes.ClassII == classII where classes.ClassIII == classIII select classes.Description; return result.FirstOrDefault(); } } and use this method in various places in foreach loops or just plain call it numerous times, does it create and open a new connection every time? If so, how can I tackle this? Should I cache the results somewhere, like in this case, where I would cache the entire Classifications table in Memory Cache? And then run queries against this cached object? Or should I make the TicDatabaseEntities variable static and initialize it at class level? Should my class be static if it contains only static methods? Because right now it is not.. Also I've noticed that if I return result.First() instead of FirstOrDefault() and the query does not find a match, it will throw an exception (with FirstOrDefault() there is no exception, it returns null). Thank you for clarification.

    Read the article

  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms Application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on: 1. Copying of sdf file to bin My sdf file is located inside of an Infrastructure project that houses my repository implementations. When the application is first debugged the sdf was copied to debug\bin. This is where all future reads/writes operate. At some point when this is deployed the file will go into a data folder using Click Once, but during development where should I be putting this sdf? Is having it in the bin typical, or are there any other recommendations? 2. Updating sdf It appears that writing to the sdf file does not immediately update the database. I am using Linq-to-SQL and am calling SubmitChanges, but on read the values are not returned. However if I close the application and re-open it the added value is there. Is there an additional flush step I need to take? What is causing this, file locking, buffering, something else? Update 3. Unit Tests I have an MS test project, and the sdf file is not being copied to the correct output directory. I have the settings: Build Action: Content Copy to Output Directory: Copy Always The message is: System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database. I appreciate any guidance on these questions, thanks. If there is a tutorial other than what is on MSDN that you know about that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.

    Read the article
