Search Results

  • Is the appassembler plugin broken for the Java Service Wrapper on Windows 64-bit?

    - by Paul McKenzie
    Hi, I'm developing on 32-bit Windows and am using appassembler to create a Java Service Wrapper assembly, and it works OK. But I also need to create a 64-bit assembly for deployment to a dev server. In the following config I have substituted the 32-bit platform with the 64-bit one; see the <includes> section. But it no longer places the wrapper jar and dll in the lib folder. If I omit the includes completely, I get Linux, Solaris, Mac OS X and Win32 libraries, but no Win64. Anyone got this working?

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>appassembler-maven-plugin</artifactId>
          <version>1.1-SNAPSHOT</version>
          <configuration>
            <target>${project.build.directory}/appassembler</target>
            <repositoryLayout>flat</repositoryLayout>
            <defaultJvmSettings>
              <initialMemorySize>256M</initialMemorySize>
              <maxMemorySize>1024M</maxMemorySize>
            </defaultJvmSettings>
            <daemons>
              <daemon>
                <id>MyApp</id>
                <mainClass>com.foo.AppMain</mainClass>
                <platforms>
                  <platform>jsw</platform>
                </platforms>
                <generatorConfigurations>
                  <generatorConfiguration>
                    <generator>jsw</generator>
                    <includes>
                      <include>windows-x86-64</include>
                    </includes>
                    <configuration>
                      <property>
                        <name>set.default.REPO_DIR</name>
                        <value>../../repo</value>
                      </property>
                    </configuration>
                  </generatorConfiguration>
                </generatorConfigurations>
              </daemon>
            </daemons>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>generate-daemons</goal>
                <goal>create-repository</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

  • Activate a (COM Interop based) ActiveX control using registration-free COM

    - by embnut
    I have a (COM Interop based) ActiveX control that I am trying to use with registration-free COM. When the control loads, it is inactive (does not respond to events, is not fully rendered, etc.). After much searching I discovered that COM objects using reg-free COM use the miscStatus attribute to set the initial state needed to get correctly activated. I know how to use it with a comClass element, which corresponds to a native COM object.

    1) What is the equivalent of the following for the clrClass element, which corresponds to a COM-interop object?

        <comClass clsid="{qqqqqqqq-wwww-eeee-rrrr-00C0F0283628}"
                  tlbid="{xxxxxxxx-yyyy-zzzz-aaaa-0000F8754DA1}"
                  threadingModel="Both"
                  progid="SomeCompany.SomeOleControl"
                  description="Some ActiveX Control"
                  miscStatus="recomposeonresize,insideout,activatewhenvisible,nouiactivate" >

    2) The COM client I am using is Visual FoxPro. If (1) is not possible, what can I do in VFP to activate the inactive ActiveX control? (I don't mind VB or C# input too, if I can use it to find the equivalent FoxPro.) Currently I tried the following:

        this.AddObject('OleControl1', 'oleControl', 'SomeCompany.SomeOleControl')
        this.OleControl1.AutoActivate = 3
        this.OleControl1.Visible = .T.
        this.OleControl1.SetFocus

    But OleControl1 gets focus before passing events like mouse clicks to its subelements, so any time it does not have focus I have to click twice on it to perform the necessary action. I would like the control to act as if the "nouiactivate" bit of the miscStatus value were set.

    3) Is there any other way of accomplishing what I want to do?

    Hans Passant, here is the listing of the current Assembly.dll.manifest (the formatting in the comment made it unreadable):

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity name="Assembly" version="1.0.0.0" type="win32" publicKeyToken="wwwwwwwwwwwwwwww"/>
          <clrClass name="SomeCompany.SomeOleControl" clsid="{qqqqqqqq-wwww-eeee-rrrr-00C0F0283628}" progid="SomeCompany.SomeOleControl" threadingModel="Both"/>
          <file name="Assembly.tlb">
            <typelib tlbid="{xxxxxxxx-yyyy-zzzz-aaaa-0000F8754DA1}" version="1.0" helpdir="" flags="hasdiskimage"/>
          </file>
        </assembly>

  • Why am I seeing a crash when trying to call CDHtmlDialog::OnInitDialog()

    - by Tim
    I added a Help/About menu item to my MFC app and decided to make the dialog derive from CDHtmlDialog. I override the OnInitDialog() method in my derived class, and the first thing I do is call the parent's OnInitDialog(); I then put in code that sets the title. On some machines this works fine, but on others it crashes in the call to CDHtmlDialog::OnInitDialog(), trying to read a null pointer. The call stack has nothing useful; it is in mfc90.dll. Is this a potential problem with mismatches of MFC/win32 DLLs? It works on my Vista machines but crashes on a Win2003 Server box.

        BOOL HTMLAboutDlg::OnInitDialog()
        {
            // CRASHES on the following line
            CDHtmlDialog::OnInitDialog();

            CString title = "my title"; // example of setting title

            // I try to get version info and set the title
            CModuleVersion ver;
            char filename[ _MAX_PATH ];
            GetModuleFileName( AfxGetApp()->m_hInstance, filename, _MAX_PATH );
            ver.GetFileVersionInfo(filename);

            // get version from VS_FIXEDFILEINFO struct
            CString s;
            s.Format("Version: %d.%d.%d.%d\n",
                     HIWORD(ver.dwFileVersionMS), LOWORD(ver.dwFileVersionMS),
                     HIWORD(ver.dwFileVersionLS), LOWORD(ver.dwFileVersionLS));

            CString version = ver.GetValue(_T("ProductVersion"));
            version.Remove(' ');
            version.Replace(",", ".");
            title = "MyApp - Version " + version;
            SetWindowText(title);

            return TRUE; // return TRUE unless you set the focus to a control
        }

    And here is the relevant header file:

        class HTMLAboutDlg : public CDHtmlDialog
        {
            DECLARE_DYNCREATE(HTMLAboutDlg)

        public:
            HTMLAboutDlg(CWnd* pParent = NULL); // standard constructor
            virtual ~HTMLAboutDlg();

            // Overrides
            HRESULT OnButtonOK(IHTMLElement *pElement);
            HRESULT OnButtonCancel(IHTMLElement *pElement);

            // Dialog Data
            enum { IDD = IDD_DIALOG_ABOUT, IDH = IDR_HTML_HTMLABOUTDLG };

        protected:
            virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support
            virtual BOOL OnInitDialog();

            DECLARE_MESSAGE_MAP()
            DECLARE_DHTML_EVENT_MAP()
        };

    I can't figure out what is going on, or why it works on some machines and crashes on others. Both have VS2008 installed.

    EDIT: MFC versions:

        Vista (no crashes):      9.0.30729.1 SP
        2003 Server (crashes):   9.0.21022.8 RTM

  • Freetype2 failing under WoW64

    - by Necrolis
    I built a TTF-to-D3D-texture function using freetype2 (2.3.9) to generate grayscale maps from the fonts. It works great under native win32; however, on WoW64 it just explodes (well, FT_Done and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free. I know it should work, as games like WCIII, which to the best of my knowledge use freetype2, run fine. This is my code, stripped of the D3D code (which causes no problems on its own):

        FT_Face pFace = NULL;
        FT_Error nError = 0;
        FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer,&nSize));
        if((nError = FT_New_Memory_Face(pLibrary,pFont,nSize,0,&pFace)) == 0)
        {
            FT_Set_Char_Size(pFace,nSize << 6,nSize << 6,96,96);
            for(unsigned char c = 0; c < 95; c++)
            {
                if(!FT_Load_Glyph(pFace,FT_Get_Char_Index(pFace,c + 32),FT_LOAD_RENDER))
                {
                    FT_Glyph pGlyph;
                    if(!FT_Get_Glyph(pFace->glyph,&pGlyph))
                    {
                        LOG("GET: %c",c + 32);
                        FT_Glyph_To_Bitmap(&pGlyph,FT_RENDER_MODE_NORMAL,0,1);
                        FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph);
                        FT_Bitmap* pBitmap = &pGlyphMap->bitmap;
                        const size_t nWidth = pBitmap->width;
                        const size_t nHeight = pBitmap->rows;
                        //add to texture atlas
                    }
                }
            }
        }
        else
        {
            FT_Done_Face(pFace);
            delete pFont;
            return FALSE;
        }
        FT_Done_Face(pFace);
        delete pFont;
        return TRUE;

    ARCHIVE_LoadFile returns blocks allocated with new.

    As a secondary question, I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure whether this stretches the font to fit the size, or bounds it to a size. What I would like to do is render all the glyphs at, say, 24px (MS Word size here), then turn them into a signed distance field in a 32px area.

    Update: After much fiddling, I got a test app to work, which leads me to think the problems arise from threading, as my code is running in a secondary thread. I have compiled freetype into a static lib using the multithreaded DLL runtime, and my app uses the multithreaded libs. Gonna see if I can set up a multithreaded test. Also updated to 2.4.4 to see if the problem was a known but fixed bug; it didn't help, however.

    Update 2: After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4. After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in the Windows heap management. Is it possible that there is a bug in freetype2 that makes it blow up under user threads?
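    One thing worth ruling out, given the static-lib/runtime mix described above: HeapFree crashes of this kind classically come from allocating in one CRT and freeing in another, and FreeType also does not synchronize access to an FT_Library across threads. A minimal sketch (not the poster's code) that keeps the whole FreeType lifetime on the worker thread, so neither the library handle nor its allocations cross threads:

        // Hypothetical worker: FT_Library created, used and destroyed on the
        // same thread, so no FreeType object or heap block crosses threads.
        #include <windows.h>
        #include <ft2build.h>
        #include FT_FREETYPE_H

        DWORD WINAPI FontWorker(LPVOID)
        {
            FT_Library lib = NULL;
            if(FT_Init_FreeType(&lib) != 0)
                return 1;

            // ... FT_New_Memory_Face / FT_Load_Glyph / FT_Done_Face
            //     exactly as in the snippet above ...

            FT_Done_FreeType(lib);
            return 0;
        }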

  • Right edge border unpainted & theme drawing on non-client area

    - by CodeVisio
    Basically, the problem concerns border flickering during window resizing on Windows. My first goal was repositioning controls on a dialog while resizing it. I think I got good dynamic repositioning with almost no flickering during that operation, but here I'm talking about main-window border flickering, which I wasn't able to eliminate at all.

    To reproduce, create a simple win32 app with the default code VS provides. I'm testing on Windows 7 (64-bit) with the default theme (Windows 7 Basic, no transparency) and VS2008.

    1) Do not add code to the app.
    2) Run it in debug mode.
    3) Drag the left edge of the window slowly toward the left of the screen and at the same time keep an eye on the right edge border of the window. You should see redrawing taking place.
    4) Repeat step 3 moving the mouse rapidly; you should see the flickering on the right edge more clearly.

    If you invert the edges, that is, move the right edge of the window, the left edge stays firmly there without unpainted regions. The same happens for the top edge border vs. the bottom one.

    Now enable the Classic theme (the one similar to Win2000) and repeat the steps above. The right edge stays perfectly in place, without flickering at all!

    If you keep an eye on the Output window of Visual Studio when you run in debug mode, you should see a list of DLLs loaded together with your exe. If you run in debug mode with the default theme, you will see uxtheme.dll loaded. On the contrary, with the Classic theme enabled, uxtheme.dll is not loaded (dwmapi.dll is always loaded). Probably uxtheme.dll is loaded at runtime, based on desktop settings, and takes over redrawing your window's non-client area.

    Another trick you can use to see the effect of this flickering is to add a case for WM_NCPAINT and return 0 instead of calling DefWindowProc(). Repeating the steps above and moving fast, you should see a big part of the right edge of the window completely erased by background windows. This doesn't happen for the top and bottom ones.

    Any idea how to resolve this flickering? Thank you!
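    For the client-area part of the flicker there are two widely used mitigations worth trying on the stock template; whether they cure the themed non-client edge is another matter, so treat this as a sketch rather than a fix:

        // In the template's WndProc: claim the background is already erased,
        // so resizing doesn't flash the background brush before WM_PAINT.
        case WM_ERASEBKGND:
            return 1;

        // At creation time: ask Windows to double-buffer the whole window
        // (WS_EX_COMPOSITED, available since XP; it has performance costs).
        HWND hWnd = CreateWindowEx(WS_EX_COMPOSITED, szWindowClass, szTitle,
                                   WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, 0,
                                   CW_USEDEFAULT, 0, NULL, NULL, hInstance, NULL);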

  • stringstream problem - vector iterator not dereferencable

    - by andreas
    Hello, I've got a problem with the following code snippet. It is related to the stringstream css(cv.back()) bit; if that is commented out, the program will run OK. It is really weird, as I keep getting this in some of my programs, but if I just create a console project the code will run fine. In some of my Win32 programs it will too, and in some it won't (then it reports "vector iterator not dereferencable", but it compiles just fine). Any ideas at all would be really appreciated. Thanks!

        vector<double> cRes(2);
        vector<double> pRes(2);

        int readTimeVects2(vector<double> &cRes, vector<double> &pRes){
            string segments;
            vector<string> cv, pv, chv, phv;

            ifstream cin("cm.txt");
            ifstream pin("pw.txt");
            ifstream chin("hm.txt");
            ifstream phin("hw.txt");

            while (getline(cin,segments,'\t'))  { cv.push_back(segments); }
            while (getline(pin,segments,'\t'))  { pv.push_back(segments); }
            while (getline(chin,segments,'\t')) { chv.push_back(segments); }
            while (getline(phin,segments,'\t')) { phv.push_back(segments); }

            cin.close();
            pin.close();
            chin.close();
            phin.close();

            stringstream phss(phv.front());
            phss >> pRes[0];
            phss.clear();

            stringstream chss(chv.front());
            chss >> cRes[0];
            chss.clear();

            stringstream pss(pv.back());
            pss >> pRes[1];
            pss.clear();

            stringstream css(cv.back());
            css >> cRes[1];
            css.clear();

            return 0;
        }
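    A note on the failure mode, since it explains the randomness: front() and back() on an empty vector are undefined behavior, and MSVC's checked iterators report it with exactly this "vector iterator not dereferencable" assertion. If cm.txt is missing or empty in a given project's working directory, cv ends up empty, and the last stringstream line is the first one to touch an empty vector. A sketch of a guard that would slot in just before the stringstream block:

        // Sketch: refuse to call front()/back() on empty vectors; an absent
        // or empty input file otherwise triggers the debug-runtime assertion.
        if (cv.empty() || pv.empty() || chv.empty() || phv.empty()) {
            cerr << "an input file was missing or empty" << endl;
            return 1;
        }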

  • Slowdowns when reading from an urlconnection's inputstream (even with byte[] and buffers)

    - by user342677
    OK, so after spending two days trying to figure out the problem, and reading about a dizillion articles, I finally decided to man up and ask for some advice (my first time here). Now to the issue at hand: I am writing a program which will parse API data from a game, namely battle logs. There will be A LOT of entries in the database (20+ million), so the parsing speed for each battle-log page matters quite a bit.

    The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0 (view the source if using Chrome; it doesn't display the page right). Each has 1000 hit entries, followed by a little battle info (the last page will have <1000, obviously). On average, a page contains 175000 characters, UTF-8 encoding, XML format (v 1.0). The program will run locally on a good PC; memory is virtually unlimited (so creating a byte[250000] is quite OK). The format never changes, which is quite convenient. Now, I started off as usual:

        //global vars, class declaration skipped
        public WebObject(String url_string, int connection_timeout, int read_timeout,
                         boolean redirects_allowed, String user_agent)
                throws java.net.MalformedURLException, java.io.IOException {
            // Open a URL connection
            java.net.URL url = new java.net.URL(url_string);
            java.net.URLConnection uconn = url.openConnection();
            if (!(uconn instanceof java.net.HttpURLConnection)) {
                throw new java.lang.IllegalArgumentException("URL protocol must be HTTP");
            }
            conn = (java.net.HttpURLConnection) uconn;
            conn.setConnectTimeout(connection_timeout);
            conn.setReadTimeout(read_timeout);
            conn.setInstanceFollowRedirects(redirects_allowed);
            conn.setRequestProperty("User-agent", user_agent);
        }

        public void executeConnection() throws IOException {
            try {
                is = conn.getInputStream(); //global var
                l = conn.getContentLength(); //global var
            } catch (Exception e) {
                //handling code skipped
            }
        }

        //getContentStream and getLength methods which just return 'is' and 'l' are skipped

    Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long and what doesn't. The call to this method takes only 200ms on average:

        public InputStream getWebPageAsStream(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 "
                + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)");
            wobj.executeConnection();
            l = wobj.getContentLength(); // global variable
            return wobj.getContentStream(); //returns 'is' stream
        }

    200ms is quite expected from a network operation, and I am fine with it. BUT when I parse the InputStream in any way (read it into a string / use a Java XML parser / read it into another ByteArrayStream), the process takes over 1000ms!

    For example, this code takes 1000ms IF I pass the stream I got ('is') above from getContentStream() directly to this method:

        public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.parse(is);
            doc.getDocumentElement().normalize();
            return doc;
        }

    This code, too, takes around 920ms IF the initial InputStream 'is' is passed in (don't read too much into the code itself; it just extracts the data I need by directly counting the characters, which can be done thanks to the rigid API feed format):

        public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException {
            // Point A
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            LinkedList ll = new LinkedList();
            String str = br.readLine();
            while (str != null) {
                ll.add(str);
                str = br.readLine();
            }
            if (((String) ll.get(1)).indexOf("error") != -1) {
                return new parsedBattlePage(null, null, true, -1);
            }
            //Point B
            Iterator it = ll.iterator();
            it.next(); it.next(); it.next(); it.next();
            String[][] hits_arr = new String[1000][4];
            String t_str = (String) it.next();
            String tmp = null;
            int j = 0;
            for (int i = 0; t_str.indexOf("time") != -1; i++) {
                hits_arr[i][0] = t_str.substring(12, t_str.length() - 11);
                tmp = (String) it.next();
                hits_arr[i][1] = tmp.substring(14, tmp.length() - 9);
                tmp = (String) it.next();
                hits_arr[i][2] = tmp.substring(15, tmp.length() - 10);
                tmp = (String) it.next();
                hits_arr[i][3] = tmp.substring(18, tmp.length() - 13);
                it.next();
                it.next();
                t_str = (String) it.next();
                j++;
            }
            String[] b_info_arr = new String[9];
            int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13};
            for (int i = 0; i < space_nums.length; i++) {
                tmp = (String) it.next();
                b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1);
            }
            //Point C
            return new parsedBattlePage(hits_arr, b_info_arr, false, j);
        }

    I have tried replacing the default BufferedReader with

        BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000);

    This didn't change much. My second try was to replace the code between A and B with

        Iterator it = IOUtils.lineIterator(is, "UTF-8");

    Same result, except this time A-B was 0ms and B-C was 1000ms, so every call to it.next() must have been consuming some significant time. (IOUtils is from the apache-commons-io library.)

    And here is the culprit: the time taken to parse the stream into strings, be it by an iterator or a BufferedReader, in ALL cases was about 1000ms, while the rest of the code took 0ms (i.e. irrelevant). This means that parsing the stream into a LinkedList, or iterating over it, was for some reason eating up a lot of my system resources. The question was: why? Is it just the way Java is made? ... no ... that's just stupid. So I did another experiment. In my main method I added, after getWebPageAsStream():

        //Point A
        ba = new byte[l]; // 'l' comes from wobj.getContentLength above
        bytesRead = is.read(ba); //'is' is our URLConnection original InputStream
        offset = bytesRead;
        while (bytesRead != -1) {
            bytesRead = is.read(ba, offset - 1, l - offset);
            offset += bytesRead;
        }
        //Point B
        InputStream is2 = new ByteArrayInputStream(ba);
        //Now just working with 'is2' - the "copied" stream

    The InputStream-to-byte[] conversion again took 1000ms; this is the way many people suggested to read an InputStream, and still it is slow. And guess what: the 2 parser methods above (convertToXML() and convertBattleToXMLWithoutDOM()), when passed 'is2' instead of 'is', took, in all 4 cases, under 50ms to complete.

    I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponentsClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. For example, this code:

        public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            HttpClient httpclient = new DefaultHttpClient();
            HttpGet httpget = new HttpGet(url);
            HttpParams p = new BasicHttpParams();
            HttpConnectionParams.setSocketBufferSize(p, 250000);
            HttpConnectionParams.setStaleCheckingEnabled(p, false);
            HttpConnectionParams.setConnectionTimeout(p, 5000);
            httpget.setParams(p);
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            l = (int) entity.getContentLength();
            return entity.getContent();
        }

    took even longer to process (50ms more, for just the network part), and the stream parsing times remained the same. Obviously it could be instantiated so as not to create the HttpClient and properties every time (faster network time), but the stream issue won't be affected by that.

    So we come to the central problem: why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while any stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared to when the same stream is just created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY WAY too long. Any ideas?

    P.S. Please ask if any more code is required. The only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the performance is OK (~200ms/1000 entries); it could probably be optimized with more caching, but I didn't look into it much.

  • Using VCL for the web (intraweb) as a trick for adding a web interface to a legacy non-tiered (2 tiers) application

    - by user193655
    My team is maintaining a huge client/server win32 Delphi application. It is a client/server application (thick client) that uses DevArt (SDAC) components to connect to SQL Server. The business logic is often "trapped" in components' event handlers; anyway, with some degree of refactoring it is doable to move the business logic into common units (a big part of this work has already been done during refactoring; maintaining legacy applications someone else wrote is very frustrating, but this is a very common job).

    Now there is the request for a web interface. I have several options, of course; in this question I want to focus on the VCL for the Web (intraweb) option. The idea is to use the common code (the same .pas files) for both the client/server application and the web application. I have heard of many people who moved legacy apps from Delphi to intraweb, but here I am trying to keep the thick client too. The idea is to use common code, maybe with some compiler directives to write specific code:

        {$IFDEF CLIENTSERVER}
          {here goes the thick client specific code}
        {$ELSE}
          {here goes the Intraweb specific code}
        {$ENDIF}

    Then another problem is the "migration plan". Let's say I have 300 features, and in the first release only 50 of them will be available in the web application. How do I keep track of that? I was thinking of (ab)using Delphi interfaces to handle it. For example, for user authentication I could move all the related code into a procedure and declare an interface like:

        type
          IUserAuthentication = interface['{0D57624C-CDDE-458B-A36C-436AE465B477}']
            procedure UserAuthentication;
          end;

    In this way, as I implement the IUserAuthentication interface in both applications (thick client and intraweb), I know that that feature has been "ported" to the web. Anyway, I don't know if this approach makes sense. I made a prototype to simulate the whole process. It works for a "Hello world" application, but I wonder if it makes sense on a large application, or whether this interface idea is only counter-productive and could backfire.

    My question is: does this approach make sense? (The interface idea is just an extra idea; it is not as important as the common-code part described above.) Is it a viable option? I understand it depends a lot on the kind of application; anyway, to be generic, mine is in the CRM/accounting domain, and the number of concurrent users on a single installation is typically less than 20, with peaks of 50.

    EXTRA COMMENT (UPDATE): I ask this question because, since I don't have an n-tier application, I see intraweb as the unique option for having a web application that shares common code with the thick client. Developing web services from the Delphi code makes no sense in my specific case, so the alternative I have is to write the web interface using ASP.NET (duplicating the business logic), but in that case I cannot take advantage of the common code in an easy way. Yes, I could maybe use DLLs, but my code is not suitable for that.

  • WPF - Random hanging with file browser attached behaviour.

    - by Stimul8d
    Hi, I have an attached behavior defined as follows:

        public static class FileBrowserBehaviour
        {
            public static bool GetBrowsesOnClick(DependencyObject obj)
            {
                return (bool)obj.GetValue(BrowsesOnClickProperty);
            }

            public static void SetBrowsesOnClick(DependencyObject obj, bool value)
            {
                obj.SetValue(BrowsesOnClickProperty, value);
            }

            // Using a DependencyProperty as the backing store for BrowsesOnClick.
            // This enables animation, styling, binding, etc...
            public static readonly DependencyProperty BrowsesOnClickProperty =
                DependencyProperty.RegisterAttached("BrowsesOnClick", typeof(bool), typeof(FileBrowserBehaviour),
                    new FrameworkPropertyMetadata(false, new PropertyChangedCallback(BrowsesOnClickChanged)));

            public static void BrowsesOnClickChanged(DependencyObject obj, DependencyPropertyChangedEventArgs args)
            {
                FrameworkElement fe = obj as FrameworkElement;
                if ((bool)args.NewValue)
                {
                    fe.PreviewMouseLeftButtonDown += new MouseButtonEventHandler(OpenFileBrowser);
                }
                else
                {
                    fe.PreviewMouseLeftButtonDown -= new MouseButtonEventHandler(OpenFileBrowser);
                }
            }

            static void OpenFileBrowser(object sender, MouseButtonEventArgs e)
            {
                var tb = sender as TextBox;
                if (tb.Text.Length < 1 || tb.Text == "Click to browse..")
                {
                    OpenFileDialog ofd = new OpenFileDialog();
                    ofd.Filter = "Executables | *.exe";
                    if (ofd.ShowDialog() == true)
                    {
                        Debug.WriteLine("Setting textbox text-" + ofd.FileName);
                        tb.Text = ofd.FileName;
                        Debug.WriteLine("Set textbox text");
                    }
                }
            }
        }

    It's a nice, simple attached behavior which pops open an OpenFileDialog when you click on a textbox and puts the filename in the box when you're done. It works maybe 40% of the time, but the rest of the time the whole app hangs. The call stack at this point looks like this:

        > 64006108()
        ntdll.dll!_ZwCreateMutant@16() + 0xc bytes
        kernel32.dll!_CreateMutexW@12() + 0x7a bytes
        WindowsBase.dll!MS.Win32.UnsafeNativeMethods.GetMessageW(ref System.Windows.Interop.MSG msg, System.Runtime.InteropServices.HandleRef hWnd, int uMsgFilterMin, int uMsgFilterMax) + 0x15 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.GetMessage(ref System.Windows.Interop.MSG msg, System.IntPtr hwnd, int minMessage, int maxMessage) + 0x48 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrameImpl(System.Windows.Threading.DispatcherFrame frame = {System.Windows.Threading.DispatcherFrame}) + 0x8b bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.PushFrame(System.Windows.Threading.DispatcherFrame frame) + 0x49 bytes
        WindowsBase.dll!System.Windows.Threading.Dispatcher.Run() + 0x4c bytes
        PresentationFramework.dll!System.Windows.Application.RunDispatcher(object ignore) + 0x1e bytes
        PresentationFramework.dll!System.Windows.Application.RunInternal(System.Windows.Window window) + 0x6f bytes
        PresentationFramework.dll!System.Windows.Application.Run(System.Windows.Window window) + 0x26 bytes
        PresentationFramework.dll!System.Windows.Application.Run() + 0x19 bytes
        Debugatron.exe!Debugatron.App.Main() + 0x5e bytes C#
        [Native to Managed Transition]
        [Managed to Native Transition]
        mscorlib.dll!System.AppDomain.nExecuteAssembly(System.Reflection.Assembly assembly, string[] args) + 0x19 bytes
        mscorlib.dll!System.Runtime.Hosting.ManifestRunner.Run(bool checkAptModel) + 0x6e bytes
        mscorlib.dll!System.Runtime.Hosting.ManifestRunner.ExecuteAsAssembly() + 0x84 bytes
        mscorlib.dll!System.Runtime.Hosting.ApplicationActivator.CreateInstance(System.ActivationContext activationContext, string[] activationCustomData) + 0x65 bytes
        mscorlib.dll!System.Runtime.Hosting.ApplicationActivator.CreateInstance(System.ActivationContext activationContext) + 0xa bytes
        mscorlib.dll!System.Activator.CreateInstance(System.ActivationContext activationContext) + 0x3e bytes
        Microsoft.VisualStudio.HostingProcess.Utilities.dll!Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssemblyDebugInZone() + 0x23 bytes
        mscorlib.dll!System.Threading.ThreadHelper.ThreadStart_Context(object state) + 0x66 bytes
        mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state) + 0x6f bytes
        mscorlib.dll!System.Threading.ThreadHelper.ThreadStart() + 0x44 bytes

    Now, I've seen this kind of thing before when doing some asynchronous stuff, but there's none of that going on at that point. The only thread alive is the UI thread! Also, I always get that last debug statement when it does hang. Can anyone point me in the right direction? This one's driving me crazy!

  • Another C datatypes question

    - by b-gen-jack-o-neill
    Hello. Well, I completely get the most basic datatypes of C, like short, int, long, float; to be exact, all the numerical types. These types need to be known so the right operations can be performed on the right numbers, for example using the FPU to add two float numbers; so the compiler must know what the type is.

    But when it comes to characters, I am a little bit off. I know that the basic C datatype char is there for ASCII character coding. But what I don't know is why you even need another datatype for characters. Why couldn't you just use a 1-byte integer value to store an ASCII character? If you call printf, you specify the datatype in the call, so you could tell printf that the integer represents an ASCII character. I don't know how cout resolves the datatype, but I guess you could just specify it somehow.

    Another thing: when you want to use Unicode, you must use the datatype wchar_t. But what if I would like to use some other coding instead of UTF, for example ISO or Windows encodings? Because wchar_t codes characters as UTF-16 or UTF-32 (I read it's compiler specific). And what if I wanted to use, for example, some imaginary new 8-byte text coding? What datatype should I use for it? I am actually pretty confused by this, because I always expected that if I want to use UTF-32 instead of ASCII, I could just tell the compiler "get the UTF-32 value of the character I typed and save it into a 4-char field." I thought that text coding is to be dealt with at the end, by the print function for example, and that I just need to specify the coding for the compiler to use. Since Windows doesn't use ASCII in win32 apps, I guess the C compiler must convert the char I typed to ASCII from whatever the type is that Windows sends to the editor.

    And the last thing: what if I want to use, for example, a 25-byte integer for some high-precision math operations? C has no specify-it-yourself datatype. Yes, I know that this would be difficult, since all the math operations would need to be changed, because the CPU cannot add 25-byte numbers together. But is there a way to do it? Or is there some math library for it? What if I want to compute Pi to 1000000000000000 digits? :)

    I know my question is pretty long, but I just wanted to explain my thoughts as best I can in English; since it's not my native language, it is difficult. And I believe there is a simple answer to my question(s), something I missed that explains everything. I have read a lot about text coding and C tutorials, but nothing about this. Thank you for your time.
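    On the first point, the short version is that char already is an integer type in C; what changes between %c and %d is only how printf renders the value. A tiny sketch, with <stdint.h> providing explicitly sized integers:

        /* Sketch: a char and a 1-byte integer hold the same bits; the printf
           conversion specifier decides the interpretation. */
        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint8_t n = 65;           /* one-byte integer             */
            char    c = 'A';          /* same bit pattern             */
            printf("%c %d\n", n, c);  /* prints "A 65"                */
            return 0;
        }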

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors, and the string constants that are to be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command-line switches

        --param ggc-min-expand=0 --param ggc-min-heapsize=4096

    may solve the problem. They, however, only seem to work when optimization is turned on.

    1) Is this really the solution that I am looking for?
    2) Or is there a faster, better (compiling takes ages with these options activated) way to do this?

    Best wishes, Alexander

    Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting; what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.)

    The solution that I switched to now is reading the original data in from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do it in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by standard named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as

        extern char _binary_inbuilt_training_data_bin_start;

    but this does not seem to be right; only

        extern char binary_inbuilt_training_data_bin_start;

    worked for me.
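    For reference, the array approach mentioned in the update usually looks like the sketch below (hypothetical names): a statically initialized table lands in the data section instead of being compiled into millions of push_back calls, though, as the update notes, even this was not enough at 40 MB.

        #include <string>
        #include <vector>

        // Sketch: generated data as a plain table, wrapped on demand.
        static const char* const kStrings[] = {
            "first constant",
            "second constant",
            /* ... generated entries ... */
        };

        std::vector<std::string> makeStrings()
        {
            return std::vector<std::string>(
                kStrings, kStrings + sizeof(kStrings) / sizeof(kStrings[0]));
        }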

  • Is there an easier way of creating a volatile registry subkey in .NET?

    - by Simon
    So far I have the below, which is taken from http://www.danielmoth.com/Blog/volatile-registrykey.aspx

        public static class RegistryHelper
        {
            public static RegistryKey CreateVolatileSubKey(RegistryKey rk, string subkey, RegistryKeyPermissionCheck permissionCheck)
            {
                var rk2 = rk.GetType();
                const BindingFlags bfStatic = BindingFlags.NonPublic | BindingFlags.Static;
                const BindingFlags bfInstance = BindingFlags.NonPublic | BindingFlags.Instance;
                rk2.GetMethod("ValidateKeyName", bfStatic).Invoke(null, new object[] { subkey });
                rk2.GetMethod("ValidateKeyMode", bfStatic).Invoke(null, new object[] { permissionCheck });
                rk2.GetMethod("EnsureWriteable", bfInstance).Invoke(rk, null);
                subkey = (string)rk2.GetMethod("FixupName", bfStatic).Invoke(null, new object[] { subkey });

                if (!(bool)rk2.GetField("remoteKey", bfInstance).GetValue(rk))
                {
                    var key = (RegistryKey)rk2.GetMethod("InternalOpenSubKey", bfInstance, null, new[] { typeof(string), typeof(bool) }, null)
                        .Invoke(rk, new object[] { subkey, permissionCheck != RegistryKeyPermissionCheck.ReadSubTree });
                    if (key != null)
                    {
                        rk2.GetMethod("CheckSubKeyWritePermission", bfInstance).Invoke(rk, new object[] { subkey });
                        rk2.GetMethod("CheckSubTreePermission", bfInstance).Invoke(rk, new object[] { subkey, permissionCheck });
                        rk2.GetField("checkMode", bfInstance).SetValue(key, permissionCheck);
                        return key;
                    }
                }

                rk2.GetMethod("CheckSubKeyCreatePermission", bfInstance).Invoke(rk, new object[] { subkey });
                int lpdwDisposition;
                IntPtr hkResult;
                var srh = Type.GetType("Microsoft.Win32.SafeHandles.SafeRegistryHandle");
                var temp = rk2.GetField("hkey", bfInstance).GetValue(rk);
                var rkhkey = (SafeHandleZeroOrMinusOneIsInvalid)temp;
                var getregistrykeyaccess = (int)rk2.GetMethod("GetRegistryKeyAccess", bfStatic, null, new[] { typeof(bool) }, null)
                    .Invoke(null, new object[] { permissionCheck != RegistryKeyPermissionCheck.ReadSubTree });
                var errorCode = RegCreateKeyEx(rkhkey, subkey, 0, null, 1, getregistrykeyaccess, IntPtr.Zero, out hkResult, out lpdwDisposition);
                var keyNameField = rk2.GetField("keyName", bfInstance);
                var rkkeyName = (string)keyNameField.GetValue(rk);

                if (errorCode == 0 && hkResult.ToInt32() > 0)
                {
                    var rkremoteKey = (bool)rk2.GetField("remoteKey", bfInstance).GetValue(rk);
                    var hkResult2 = srh.GetConstructor(BindingFlags.Instance | BindingFlags.NonPublic, null, new[] { typeof(IntPtr), typeof(bool) }, null)
                        .Invoke(new object[] { hkResult, true });
                    var key2 = (RegistryKey)rk2.GetConstructor(BindingFlags.Instance | BindingFlags.NonPublic, null,
                        new[] { hkResult2.GetType(), typeof(bool), typeof(bool), typeof(bool), typeof(bool) }, null)
                        .Invoke(new[] { hkResult2, permissionCheck != RegistryKeyPermissionCheck.ReadSubTree, false, rkremoteKey, false });
                    rk2.GetMethod("CheckSubTreePermission", bfInstance).Invoke(rk, new object[] { subkey, permissionCheck });
                    rk2.GetField("checkMode", bfInstance).SetValue(key2, permissionCheck);
                    if (subkey.Length == 0)
                    {
                        keyNameField.SetValue(key2, rkkeyName);
                    }
                    else
                    {
                        keyNameField.SetValue(key2, rkkeyName + @"\" + subkey);
                    }
                    key2.Close();
                    return rk.OpenSubKey(subkey, true);
                }

                if (errorCode != 0)
                    rk2.GetMethod("Win32Error", bfInstance).Invoke(rk, new object[] { errorCode, rkkeyName + @"\" + subkey });
                return null;
            }

            [DllImport("advapi32.dll", CharSet = CharSet.Auto)]
            private static extern int RegCreateKeyEx(SafeHandleZeroOrMinusOneIsInvalid hKey, string lpSubKey, int reserved, string lpClass,
                int dwOptions, int samDesigner, IntPtr lpSecurityAttributes, out IntPtr hkResult, out int lpdwDisposition);
        }

    This works, but is fairly ugly. Is there a better way?
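    For context, the volatility itself is just the REG_OPTION_VOLATILE flag to RegCreateKeyEx (the dwOptions value of 1 in the P/Invoke above), and .NET 4 later exposed it directly as RegistryOptions.Volatile on RegistryKey.CreateSubKey. A minimal native sketch of what all that reflection is reaching for:

        // Sketch: a volatile key via the raw Win32 API; it vanishes at reboot.
        #include <windows.h>

        HKEY CreateVolatileKey(HKEY root, const wchar_t* subkey)
        {
            HKEY hKey = NULL;
            DWORD disposition = 0;
            LONG rc = RegCreateKeyExW(root, subkey, 0, NULL,
                                      REG_OPTION_VOLATILE,   // the whole trick
                                      KEY_READ | KEY_WRITE,
                                      NULL, &hKey, &disposition);
            return (rc == ERROR_SUCCESS) ? hKey : NULL;
        }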

  • How to enable a UAC prompt through programming

    - by peter
    I want to implement a UAC prompt for an application in Visual C++. The operating system is 32-bit, x7460 (2 processors), Windows Server 2008; the exe is myproject.exe, and the prompt should come through a manifest. For testing, I will build the application on a Windows XP machine, then copy the exe onto the Windows Server 2008 machine and replace it there.

    So what I did is add a manifest like this, named myproject.exe.manifest. My project has 3 folders (Header Files, Resource Files and Source Files); I added this manifest to the Source Files folder containing the other cpp and c code:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <assemblyIdentity version="11.1.4.0" processorArchitecture="X7460" name="myproject" type="win32"/>
          <description>myproject Problem</description>
          <!-- Identify the application security requirements. -->
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
              </requestedPrivileges>
            </security>
          </trustInfo>
        </assembly>

    Then I added these lines for the resource file (.rc). There is one header file (Myproject.h); I added them there:

        #define MANIFEST_RESOURCE_ID 1
        MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest"

    Finally I did the following steps:

    1. Open your project in Microsoft Visual Studio 2005.
    2. Under Project, select Properties.
    3. In Properties, select Manifest Tool, and then select Input and Output.
    4. Add the name of your application manifest file under Additional manifest files.
    5. Rebuild your application.

    But I am getting a lot of syntax errors. Is there any problem with the way I did this? If I comment out the lines

        #define MANIFEST_RESOURCE_ID 1
        MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest"

    which I added to Myproject.h for adding values to the .rc file, there is no error other than this general error:

        c1010070: Failed to load and parse the manifest. The system cannot find the file specified. .\myproject.exe.manifest

    How do I enable the UAC prompt through programming?
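    For what it's worth, a common cause of exactly these symptoms is that resource statements are being fed to the C/C++ compiler: the RT_MANIFEST line belongs in the .rc script itself, not in Myproject.h, which every .cpp includes. A typical fragment, assuming the usual resource-ID-1 convention for exe manifests:

        // In myproject.rc, not in a C/C++ header. ID 1 is the slot the loader
        // reads for an exe (CREATEPROCESS_MANIFEST_RESOURCE_ID in winuser.h).
        #include <winuser.h>
        CREATEPROCESS_MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest"

    Also note that embedding via the .rc and listing the file under the Manifest Tool's Additional Manifest Files are two separate mechanisms; using one or the other avoids them fighting over the same resource.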

  • Good style for handling constructor failure of critical object

    - by mtlphil
    I'm trying to decide between two ways of instantiating an object and handling any constructor exceptions, for an object that is critical to my program; i.e., if construction fails, the program can't continue.

    I have a class SimpleMIDIOut that wraps basic Win32 MIDI functions. It opens a MIDI device in the constructor and closes it in the destructor. It throws an exception inherited from std::exception in the constructor if the MIDI device cannot be opened. Which of the following ways of catching constructor exceptions for this object would be more in line with C++ best practices?

    Method 1: stack-allocated object, only in scope inside the try block.

        #include <iostream>
        #include "simplemidiout.h"

        int main()
        {
            try
            {
                SimpleMIDIOut myOut;  //constructor will throw if MIDI device cannot be opened
                myOut.PlayNote(60,100);
                //.....
                //myOut goes out of scope outside this block,
                //so basically the whole program has to be inside
                //this block.
                //On the plus side, it's on the stack, so the
                //destructor that handles object cleanup
                //is called automatically, more in line with the RAII idiom?
            }
            catch(const std::exception& e)
            {
                std::cout << e.what() << std::endl;
                std::cin.ignore();
                return 1;
            }

            std::cin.ignore();
            return 0;
        }

    Method 2: pointer to object, heap-allocated, nicer structured code?

        #include <iostream>
        #include "simplemidiout.h"

        int main()
        {
            SimpleMIDIOut *myOut;
            try
            {
                myOut = new SimpleMIDIOut();
            }
            catch(const std::exception& e)
            {
                std::cout << e.what() << std::endl;
                delete myOut;
                return 1;
            }

            myOut->PlayNote(60,100);
            std::cin.ignore();
            delete myOut;
            return 0;
        }

    I like the look of the code in Method 2 better, since I don't have to jam my whole program into a try block, but Method 1 creates the object on the stack, so C++ manages the object's lifetime, which is more in tune with the RAII philosophy, isn't it? I'm still a novice at this, so any feedback on the above is much appreciated. If there's an even better way to check for and handle constructor failure in a situation like this, please let me know.
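    Worth noting: in Method 2 as written, if new throws, myOut is never assigned, so the delete in the catch block reads an uninitialized pointer, which is itself undefined behavior. A smart pointer gives Method 2's layout with Method 1's automatic cleanup; a sketch with std::unique_ptr (C++11; std::auto_ptr or boost::scoped_ptr fill the same role on a 2008-era compiler):

        #include <exception>
        #include <iostream>
        #include <memory>
        #include "simplemidiout.h"   // the wrapper class from the question

        int main()
        {
            std::unique_ptr<SimpleMIDIOut> myOut;
            try
            {
                myOut.reset(new SimpleMIDIOut()); // throws if no MIDI device
            }
            catch(const std::exception& e)
            {
                std::cout << e.what() << std::endl;
                return 1;   // pointer still empty, nothing to clean up
            }

            myOut->PlayNote(60,100);
            return 0;       // destructor (and device close) runs automatically
        }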

  • Registry Problem

    - by Dominik
    I made a launcher for my game server (World of Warcraft). I want to get the install path of the game, browsed to by the user. I'm using this code to browse, get the install path, build some other strings from the InstallPath string, and then store them in my registry key:

        using System;
        using System.Drawing;
        using System.Reflection;
        using System.Collections;
        using System.ComponentModel;
        using System.Windows.Forms;
        using System.Data;
        using Microsoft.Win32;
        using System.IO;
        using System.Net.NetworkInformation;
        using System.Diagnostics;
        using System.Runtime;
        using System.Runtime.InteropServices;
        using System.Security;
        using System.Security.Cryptography;
        using System.Text;
        using System.Net;
        using System.Linq;
        using System.Net.Sockets;
        using System.Collections.Generic;
        using System.Threading;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                string InstallPath, WoWExe, PatchPath;

                private void Form1_Load(object sender, EventArgs e)
                {
                    RegistryKey LocalMachineKey_Existence;
                    MessageBox.Show("Browse your install location.", "Select Wow.exe");
                    OpenFileDialog BrowseInstallPath = new OpenFileDialog();
                    BrowseInstallPath.Filter = "wow.exe|*.exe";
                    if (BrowseInstallPath.ShowDialog() == DialogResult.OK)
                    {
                        InstallPath = System.IO.Path.GetDirectoryName(BrowseInstallPath.FileName);
                        WoWExe = InstallPath + "\\wow.exe";
                        PatchPath = InstallPath + "\\Data\\";
                        LocalMachineKey_Existence = Registry.LocalMachine.CreateSubKey(@"SOFTWARE\ExistenceWoW");
                        LocalMachineKey_Existence.SetValue("InstallPathLocation", InstallPath);
                        LocalMachineKey_Existence.SetValue("PatchPathLocation", PatchPath);
                        LocalMachineKey_Existence.SetValue("WoWExeLocation", WoWExe);
                    }
                }
            }
        }

    The problem is: on some computers it doesn't store the value it should. For example, your wow.exe is in C:\ASD\wow.exe; you select it with the browse window, and the program should store C:\ASD\Data\ in the Existence registry key, but it stores C:\ASDData, so it forgets a backslash :S Look at this picture: http://img21.imageshack.us/img21/2829/regedita.jpg My program works fine on my PC and on my friend's PC, but on some PCs this "bug" comes out :S I have Windows 7, with .NET 3.5. Please help me.

  • Access violation when running native C++ application that uses a /clr built DLL

    - by doobop
    I'm reorganizing a legacy mixed (managed and unmanaged DLLs) application so that the main application segment is unmanaged MFC, which will call a C++ DLL compiled with the /clr flag that bridges the communication between the managed (C# DLLs) and unmanaged code. Unfortunately, my changes have resulted in an access violation that occurs before the application's InitInstance() is called. This makes it very difficult to debug. The only information I get is the following stack trace:

        > 64006108()
        ntdll.dll!_ZwCreateMutant@16() + 0xc bytes
        kernel32.dll!_CreateMutexW@12() + 0x7a bytes

    So, here are some scenarios I've tried:

    - Turned on Exceptions > Win32 Exceptions > c0000005 Access Violation to break when thrown. Still, the most detail I get is the above stack trace.
    - Tried stepping through the application with F10, but it fails before any breakpoints are hit, with the above stack trace.
    - Stubbed out the bridge DLL so that it only has one method that returns a bool, and that method is coded to just return false (no C# code called):

        bool DllPassthrough::IsFailed()
        {
            return false;
        }

      If the stubbed-out DLL is compiled with the /clr flag, the application fails. If it is compiled without the /clr flag, the application runs.
    - Created a stub MFC application using the Visual Studio wizard for multi-document applications and called DllPassthrough::IsFailed(). This succeeds even with the /clr flag used to compile the DLL.
    - Tried doing a manual LoadLibrary on winmm.lib as outlined in the note "Access violation when using c++/cli". The application still fails.

    So, my questions are: how do I solve the problem; any hints, strategies, or previous incidents? And failing that, how can I get more information on which code segment or library is causing the access exception? If I try more involved workarounds like doing LoadLibrary calls, I'd like to narrow it down to the failing libraries. Thanks.

    BTW, we are using Visual Studio 2008, and the project is built against the .NET 2.0 framework for the managed sections.

  • Suggest an alternative way to organize/build a database solution.

    - by Hamish Grubijan
    We are using Visual Studio 2010, but this was first conceived with VS2003. I will forward the best suggestions to my team. The current setup almost makes me vomit. It is a C# solution with most projects containing .sql files. Because we support Microsoft, Oracle, and Sybase, we home-brewed a pre-processor, much like the C preprocessor, except that substitutions are performed by a home-brewed C# program without using yacc and tools like that. #ifdefs are used for conditional macro definitions, and yeah, macros are the way this is done. A macro can expand to another macro or two, but this should eventually terminate. Only macros have #ifdef in them; the rest of the SQL-like code just uses these macros.

    Now, the various configurations: Debug, MNDebug, MNRelease, Release, SQL_APPLY_ALL, SQL_APPLY_MSFT, SQL_APPLY_ORACLE, SQL_APPLY_SYBASE, SQL_BUILD_OUTPUT_ALL, SQL_COMPILE, as well as 2 more. Also: Any CPU, Mixed Platforms, Win32.

    What drives me nuts is having to configure it correctly, choosing the right one out of 12 x 3 = 36 configurations, and having to substitute the database name depending on the type of database: config, main, or gateway. I am thinking that the configurations should be reduced to just Debug, Release, and SQL_APPLY. Also, using 0, 1, and 2 seems so 80s ...

    Finally, I think my intention to build or not build 3 types of databases for 3 types of vendors should be configured with just a tic-tac-toe board like:

        XOX
        OOX
        XXX

    In this case it would mean: build MSFT+CONFIG, all SYBASE, and all GATEWAY. Still, the overall thing, which uses a text file and a pre-processor and many configurations, seems incredibly clunky. It is year 2010 now, and someone out there is bound to have a very clean and/or creative tool/solution. The only pro is that the existing collection of macros has been well tested.

    Have you ever had to write SQL that would work for several vendors? How did you do it?

    SqlVars.txt (every one of 30 users makes a copy of a template and modifies it to suit their needs):

        // This is the default parameters file and should not be changed.
        // You can overwrite any of these parameters by copying the appropriate
        // section to override into SqlVars.txt and providing your own information.

        //Build types are 0-Config, 1-Main, 2-Gateway
        BUILD_TYPE=1
        REMOVE_COMMENTS=1

        // Login information used when applying to a Microsoft SQL server database
        SQL_APPLY_MSFT_version=SQL2005
        SQL_APPLY_MSFT_database=msftdb
        SQL_APPLY_MSFT_server=ABC
        SQL_APPLY_MSFT_user=msftusr
        SQL_APPLY_MSFT_password=msftpwd

        // Login information used when applying to an Oracle database
        SQL_APPLY_ORACLE_version=ORACLE10g
        SQL_APPLY_ORACLE_server=oradb
        SQL_APPLY_ORACLE_user=orausr
        SQL_APPLY_ORACLE_password=orapwd

        // Login information used when applying to a Sybase database
        SQL_APPLY_SYBASE_version=SYBASE125
        SQL_APPLY_SYBASE_database=sybdb
        SQL_APPLY_SYBASE_server=sybdb
        SQL_APPLY_SYBASE_user=sybusr
        SQL_APPLY_SYBASE_password=sybpwd

        ... (THIS GOES ON)

  • How to pipe two CORE::system commands in a cross-platform way

    - by Pedro Silva
    I'm writing a System::Wrapper module to abstract away from CORE::system and the qx operator. I have a serial method that attempts to connect command1's output to command2's input. I've made some progress using named pipes, but POSIX::mkfifo is not cross-platform. Here's part of what I have so far (the run method at the bottom basically calls system):

        package main;

        my $obj1 = System::Wrapper->new(
            interpreter => 'perl',
            arguments   => [-pe => q{''}],
            input       => ['input.txt'],
            description => 'Concatenate input.txt to STDOUT',
        );

        my $obj2 = System::Wrapper->new(
            interpreter => 'perl',
            arguments   => [-pe => q{'$_ = reverse $_'}],
            description => 'Reverse lines of input input',
            output      => { '>' => 'output' },
        );

        $obj1->serial( $obj2 );

        package System::Wrapper;
        #...

        sub serial {
            my ($self, @commands) = @_;

            eval {
                require POSIX; POSIX->import();
                require threads;
            };

            my $tmp_dir = File::Spec->tmpdir();
            my $last = $self;
            my @threads;

            push @commands, $self;
            for my $command (@commands) {
                croak sprintf "%s::serial: type of args to serial must be '%s', not '%s'",
                    ref $self, ref $self, ref $command || $command
                    unless ref $command eq ref $self;

                my $named_pipe = File::Spec->catfile( $tmp_dir, int \$command );

                POSIX::mkfifo( $named_pipe, 0777 )
                    or croak sprintf "%s::serial: couldn't create named pipe %s: %s",
                        ref $self, $named_pipe, $!;

                $last->output( { '>' => $named_pipe } );
                $command->input( $named_pipe );

                push @threads, threads->new( sub{ $last->run } );
                $last = $command;
            }
            $_->join for @threads;
        }
        #...

    My specific questions:

    - Is there an alternative to POSIX::mkfifo that is cross-platform? Win32 named pipes don't work, as you can't open those as regular files; neither do sockets, for the same reasons.
    - The above doesn't quite work; the two threads get spawned correctly, but nothing flows across the pipe. I suppose that might have something to do with pipe deadlocking or output buffering. What throws me off is that when I run those two commands in the actual shell, everything works as expected.
  • How to remove a .zip's extracted directory in C on Windows? (error: Directory not empty)

    - by ExtremeBlue
        /* several system #include lines lost their <...> targets in formatting */
        #include "win32-dirent.h"
        /* ... */
        #define MAXFILEPATH 1024

        bool IsDirectory(char* path)
        {
            WIN32_FIND_DATA w32fd;
            HANDLE hFindFile;
            hFindFile = FindFirstFile((PTCHAR)path, &w32fd);
            if(hFindFile == INVALID_HANDLE_VALUE)
            {
                return false;
            }
            return w32fd.dwFileAttributes & (FILE_ATTRIBUTE_DIRECTORY);
        }

        int RD(const char* folderName)
        {
            DIR *dir;
            struct dirent *ent;
            dir = opendir(folderName);
            if(dir != NULL)
            {
                while((ent = readdir(dir)) != NULL)
                {
                    if(strcmp(ent->d_name , ".") == 0 || strcmp(ent->d_name, "..") == 0)
                    {
                        continue;
                    }
                    char fileName[MAXFILEPATH];
                    sprintf(fileName,"%s%c%s", folderName, '\\', ent->d_name);
                    if(IsDirectory(fileName))
                    {
                        RD(fileName);
                    }
                    else
                    {
                        unlink(fileName);
                    }
                }
                closedir(dir);
                //chmod(folderName, S_IWRITE | S_IREAD);
                if(_rmdir(folderName) != 0) perror(folderName);
            }
            else
            {
                printf("%s <%s>\n","Could Not Open Directory.", folderName);
                return -1;
            }
            return 0;
        }

        int main(int argc, char* argv[])
        {
            if(argc < 2)
            {
                printf("usage: ./a.out \n");
                return 1;
            }
            //RD(argv[1]);
            //_mkdir("12");
            //_mkdir("12\\34");
            //_rmdir("12\\34");
            //_rmdir("12");
            char buf[0xff];
            sprintf(buf, "unzip -x -q -d 1234 1234.zip");
            system(buf);
            RD("1234");
            //unlink("D:\\dev\\c\\project\\removeFolder\\Debug\\1234\\56\\5.txt");
            //unlink("D:\\dev\\c\\project\\removeFolder\\Debug\\1234\\56\\6.txt");
            //unlink("D:\\dev\\c\\project\\removeFolder\\Debug\\1234\\1_23.zip");
            //unlink("D:\\dev\\c\\project\\removeFolder\\Debug\\1234\\4.txt");
            //_rmdir("D:\\dev\\c\\project\\removeFolder\\Debug\\1234\\56");
            //_rmdir("D:\\dev\\c\\project\\removeFolder\\Debug\\1234");
            return 0;
        }

    The unzip output is:

        Archive: 1234.zip
          inflating: 1234/4.txt
          inflating: 1234/56/5.txt
          inflating: 1234/56/6.txt
          inflating: 1234/1_23.zip
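    For reference, the Windows shell API can delete a whole tree in one call, which sidesteps the manual readdir/unlink/_rmdir recursion and its read-only and sharing pitfalls. A sketch; note SHFileOperation wants a double-null-terminated path, and the docs prefer absolute paths:

        // Sketch: remove the extracted "1234" tree via shell32.
        #include <windows.h>
        #include <shellapi.h>   // link with shell32.lib

        int RemoveTree(void)
        {
            char path[] = "1234\0";            // literal adds the 2nd NUL
            SHFILEOPSTRUCTA op = {0};
            op.wFunc  = FO_DELETE;
            op.pFrom  = path;
            op.fFlags = FOF_NOCONFIRMATION | FOF_SILENT | FOF_NOERRORUI;
            return SHFileOperationA(&op);      // returns 0 on success
        }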

  • (Strange) C++ linker error in constructor

    - by Microkernel
    I am trying to write a template class in C++ and am getting this strange linker error; I can't figure out the cause. Please let me know what's wrong with this! Here is the error message I am getting in Visual C++ 2010:

        1>------ Rebuild All started: Project: FlashEmulatorTemplates, Configuration: Debug Win32 ------
        1>  main.cpp
        1>  emulator.cpp
        1>  Generating Code...
        1>main.obj : error LNK2019: unresolved external symbol "public: __thiscall flash_emulator<char>::flash_emulator<char>(char const *,struct FLASH_PROPERTIES *)" (??0?$flash_emulator@D@@QAE@PBDPAUFLASH_PROPERTIES@@@Z) referenced in function _main
        1>C:\Projects\FlashEmulator_templates\VS\FlashEmulatorTemplates\Debug\FlashEmulatorTemplates.exe : fatal error LNK1120: 1 unresolved externals
        ========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========

    And the error message in g++:

        main.cpp: In function 'int main()':
        main.cpp:8: warning: deprecated conversion from string constant to 'char*'
        /tmp/ccOJ8koe.o: In function `main':
        main.cpp:(.text+0x21): undefined reference to `flash_emulator<char>::flash_emulator(char*, FLASH_PROPERTIES*)'
        collect2: ld returned 1 exit status

    There are 2 .cpp files and 1 header file, given below.

    emulator.h:

        #ifndef __EMULATOR_H__
        #define __EMULATOR_H__

        typedef struct
        {
            int property;
        } FLASH_PROPERTIES;

        /* Flash emulation class */
        template<class T>
        class flash_emulator
        {
        private:
            /* Private data */
            int key;

        public:
            /* Constructor - Opens an existing flash by name flashName or creates
               one with given FLASH_PROPERTIES if it doesn't exist */
            flash_emulator( const char *flashName, FLASH_PROPERTIES *properties );

            /* Constructor - Opens an existing flash by name flashName or creates
               one with the properties given in configFileName */
            flash_emulator<T>( char *flashName, char *configFileName );

            /* Destructor for the emulator */
            ~flash_emulator(){ }
        };

        #endif /* End of __EMULATOR_H__ */

    emulator.cpp:

        #include <Windows.h>
        #include "emulator.h"

        using namespace std;

        template<class T>
        flash_emulator<T>::flash_emulator( const char *flashName, FLASH_PROPERTIES *properties )
        {
            return;
        }

        template<class T>
        flash_emulator<T>::flash_emulator(char *flashName, char *configFileName)
        {
            return;
        }

    main.cpp:

        #include <Windows.h>
        #include "emulator.h"

        int main()
        {
            FLASH_PROPERTIES properties = {0};
            flash_emulator<char> myEmulator("C:\newEMu.flash", &properties);
            return 0;
        }
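    For context, this is the classic template linker failure: the constructor bodies live in emulator.cpp, where the compiler never sees a request for flash_emulator<char>, so no code is generated for it, and both linkers report an unresolved constructor. The two standard fixes, sketched:

        // Fix 1: move the member definitions into emulator.h so any file
        // using flash_emulator<char> can instantiate them.
        template<class T>
        flash_emulator<T>::flash_emulator(const char* flashName,
                                          FLASH_PROPERTIES* properties)
        {
        }

        // Fix 2: keep emulator.cpp, but explicitly instantiate the needed
        // specialization at the bottom of that file.
        template class flash_emulator<char>;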

  • Remote Desktop disconnects after reaching "Estimating connection quality..."

    - by Sam Pearson
    I'm connecting to a Windows 8 machine from a Windows 7 machine. When I try to RDP in to the machine, it prompts me for my credentials, then zooms through the process of connecting until it reaches "Estimating connection quality." After a few seconds, it disconnects without giving any message whatsoever and returns me to the Remote Desktop Connection connect window. No error message, no popups, nothing. It just silently fails to connect after reaching "Estimating connection quality." How do I solve this issue?

  • Windows Server Backup "Reading Data; please wait..."

    - by Reafidy
    On Windows Server 2008 R2, I recently added the Windows Server Backup (WSB) feature. Opening WSB, I get the message "Reading data; please wait...". This message fails to go away, even after leaving the server for over 12 hours. I also notice in Task Manager that svchost.exe (username: NETWORK SERVICE) is using all available processing power. So I terminated that process, and then WSB came online. However, after restarting the server and WSB, the issue reoccurs. WSB also fails to recognize my Store-in-Go flash drive (2 GB). What is the underlying problem here?

  • Group Policy drive maps fail with Error Code: 0x80070043

    - by Topherhead
    I'm running a Server 2008 R2 domain with all Windows 7 x64 client machines. All drives are mapped using Group Policy; they were previously hosted on a NAS. We just built a new, huge, fast server, so I'm in the process of migrating all the network drives from the NAS to the new file server (fs). The old drive maps were created using Group Policy, so I just went in, updated them to point at the new server, and selected the "Replace" option. But the drives just plain do not map. When I do an RSOP on my machine, the error for the drive map is:

        Result: Failure (Error Code: 0x80070043)

    The other odd thing, though it may or may not have anything to do with it, is that the winning GPO is shown with its SID instead of its name. The SID is correct, though. Accessing the shares through Explorer works fine, and mapping them manually works fine. Any ideas? Thanks, Chris

  • HP ACU shows parity initialization failed (with screenshot)

    - by lbanz
    I put in a new drive due to a hard drive failure. When the rebuild got to 100%, the controller failed, and I needed to reboot the server to bring it online. I had to do this about three times before it eventually finished rebuilding. But then I found that it reports the parity initialization status as failed. I've left it for a few hours, but it didn't seem to reinitialize. Then I ran the Insight online diagnostic tools, and they reported that the disk I put in has reached its read/write error threshold. So I'm beginning to think that the brand-new disk I put in is faulty. Before I put in the disk, the parity initialization was in a finished state. Should I replace the new disk I put in? I'm very worried, as I think the parity is broken. Or is there a way to kick-start the initialization process?
