Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • Does SetFileBandwidthReservation affect memory-mapped file performance?

    - by Ghostrider
    Does this function affect memory-mapped file performance? Here's the problem I need to solve: I have two applications competing for disk access, a "reader" and an "updater". The whole system runs on Windows Server 2008 R2 x64. The "updater" constantly accesses the disk in a linear manner, updating data; the system is set up so that the updater always has infinite data to update. Consider that it is constantly approximating the solution of a huge set of equations that takes up an entire 2 TB disk drive. The updater uses ReadFile and WriteFile to process the data in a linear fashion. The "reader" is occasionally invoked by a user to get some pieces of data. Usually the user reads several 4 KB blocks from the drive and stops; occasionally the user needs to read up to 100 MB sequentially, and in exceptional cases up to several gigabytes. The reader maps the files into memory to get the data it needs. What I would like to achieve is for the "reader" to have absolute priority, so that the "updater" would completely stop if needed and the "reader" could get the data the user needs ASAP. Is this problem solvable with the SetPriorityClass and SetFileBandwidthReservation calls? I would really hate to put synchronization logic into "reader" and "updater" and would rather have the OS take care of priorities.
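
    For illustration, a minimal sketch (not a verified answer) of how the two calls named above might be used inside the "updater" so that it yields to the "reader": PROCESS_MODE_BACKGROUND_BEGIN lowers the whole process's I/O priority (available on Vista/Server 2008 and later), while SetFileBandwidthReservation caps throughput on a single handle. The path and the numbers are placeholders, and whether this is enough for memory-mapped reads on the other side is exactly the open question.

        /* Hedged sketch: throttle the "updater" so the "reader" wins disk contention. */
        #define _WIN32_WINNT 0x0600
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Drop the updater to background mode: low CPU, I/O and memory priority. */
            if (!SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN))
                printf("SetPriorityClass failed: %lu\n", GetLastError());

            HANDLE hFile = CreateFileA("D:\\data\\equations.bin", /* placeholder path */
                                       GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (hFile != INVALID_HANDLE_VALUE)
            {
                DWORD transferSize = 0, outstanding = 0;
                /* Reserve roughly 10 MB per second for this handle (illustrative value). */
                if (!SetFileBandwidthReservation(hFile, 1000, 10 * 1024 * 1024, FALSE,
                                                 &transferSize, &outstanding))
                    printf("SetFileBandwidthReservation failed: %lu\n", GetLastError());
                /* ... the linear ReadFile/WriteFile loop goes here ... */
                CloseHandle(hFile);
            }
            return 0;
        }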

    Read the article

  • Cocoa Services Programming - How to identify whether the selected item is a folder or file with NSFilenamesPboardType

    - by rockybalboa
    Hi, I have some queries regarding Services menu validation. I would like to enable different services provided by my app based on whether a file or a folder is selected in the Finder. I have set NSFilenamesPboardType as the send type for the services. I have gone through the - (id)validRequestorForSendType:(NSString *)sendType returnType:(NSString *)returnType method, but my issue is that the validation there seems to be based purely on the send type and return type. In my case the pasteboard type is the same for a selected file and a selected folder, so I cannot determine whether the selected item in the Finder is a file or a folder during the validation process (this happens before the actual service gets invoked, i.e. when the Services menu is being shown to the user). So my question is: is there any way I can get some information about the selected item in the Finder and validate the different service menu items offered by my application based on that information, rather than the basic validation of the send and return types? I have not been able to find a way to do so, but the "Folder Actions" service in Snow Leopard gets enabled only for folders, so it can be done. I did a /System/Library/CoreServices/pbs -dump_pboard and it is using NSFilenamesPboardType as well, yet manages to activate only for folders. Thanks in advance for any help.

    Read the article

  • [Cocoa] Core Animation with an NSView and subviews

    - by ndg
    I've subclassed NSView to create a 'container' view (which I've called TRTransitionView) which is being used to house two subviews. At the click of a button, I want to transition one subview out of the parent view and transition the other in, using the Core Animation transition type kCATransitionPush. For the most part, I have this working as you'd expect (here's a basic test project I threw together). The issue I'm seeing relates to resizing my window and then toggling between my two views. After resizing a window, my subviews will appear at seemingly random locations within my TRTransitionView. Additionally, it appears as if the TRTransitionView hasn't stretched correctly and is clipping the contents of its subviews. Ideally, I would like subviews anchored to the top-left of their parent view at all times, and to also grow to expand the size of the parent view. The second issue relates to an NSTableView I've placed in my first subview. When my window is resized, and my TRTransitionView resizes to match its new dimensions, my TableView seems to resize its content quite awkwardly (the entire table seems to jolt around) and the newly expanded space that the table now occupies seems to 'flash' (as if in the process of being animated). Extremely difficult to describe, but is there any way to stop this? Here's my TRTransitionView class:

        -(void)awakeFromNib {
            [self setWantsLayer:YES];
            [self addSubview:[self currentView]];
            transition = [CATransition animation];
            [transition setType:kCATransitionPush];
            [transition setSubtype:kCATransitionFromLeft];
            [self setAnimations:[NSDictionary dictionaryWithObject:transition forKey:@"subviews"]];
        }

        - (void)setCurrentView:(NSView *)newView {
            if (!currentView) {
                currentView = newView;
                return;
            }
            [[self animator] replaceSubview:currentView with:newView];
            currentView = newView;
        }

        -(IBAction)switchToViewOne:(id)sender {
            [transition setSubtype:kCATransitionFromLeft];
            [self setCurrentView:viewOne];
        }

        -(IBAction)switchToViewTwo:(id)sender {
            [transition setSubtype:kCATransitionFromRight];
            [self setCurrentView:viewTwo];
        }
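
    A hedged suggestion rather than a verified fix: pinning each subview to the top-left and letting it track the parent's size can be expressed with autoresizing masks when the subviews are added, along the lines of the sketch below (the mask values are standard AppKit constants; whether this also settles the table-view flashing is untested).

        // Sketch: keep a subview filling its parent and growing with it.
        [newView setFrame:[self bounds]];
        [newView setAutoresizingMask:(NSViewWidthSizable | NSViewHeightSizable)];
        [self setAutoresizesSubviews:YES];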

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction This happened when a different task tried to access some of the same tables during the execution of my long running transaction. What confuses me is that the long running transaction only inserts or updates into temporary tables. All access to non-temporary tables are selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So my first question, is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long running transaction? If the long running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
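
    As a sketch of one direction to experiment with (not a confirmed fix, and the bean names are assumptions): Spring's TransactionTemplate lets the isolation level and timeout be set explicitly on the long-running programmatic transaction, so the defaults stop being a guess, while still keeping one connection, and therefore the temporary tables, for the whole callback.

        // Hedged sketch: one programmatic transaction with explicit settings.
        import org.springframework.transaction.TransactionDefinition;
        import org.springframework.transaction.TransactionStatus;
        import org.springframework.transaction.support.TransactionCallbackWithoutResult;
        import org.springframework.transaction.support.TransactionTemplate;

        TransactionTemplate tx = new TransactionTemplate(transactionManager);
        tx.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);
        tx.setTimeout(3600);  // seconds; illustrative value for a long batch

        tx.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // call the stored procedure in batches via JDBC/Hibernate here;
                // the connection (and its temporary tables) stays bound to this
                // transaction for the whole callback
            }
        });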

    Read the article

  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (the successor to FrontPage) to modify content, and I'd like to allow them to keep doing that. The issues are: (1) since I'd like this to be a WAP, not a website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines, and can I specify a "default" action for a controller to handle a URL like /products/new_view_here? (2) Can I let them save pages (views) and see them in the browser without having to go through the check-in/build/deploy process? (3) I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly. The ideas I've come up with so far are: (a) write an HTTP handler that implements the FrontPage Server Extensions protocol; this sounds time consuming, but I haven't yet looked at the protocol spec, and it would allow me to perform whatever operations I want on the server side, including checking files into SVN; or (b) ditch the WAP in favor of a website project, though I do not like having the source present on the server, and will MVC even work in a website project? Surely someone has tackled this problem before?

    Read the article

  • Passing a URL as a URL parameter

    - by Andrea
    I am implementing OpenId login in a CakePHP application. At a certain point, I need to redirect to another action, while preserving the information about the OpenId identity, which is itself a URL (with GET parameters), for instance https://www.google.com/accounts/o8/id?id=31g2iy321i3y1idh43q7tyYgdsjhd863Es How do I pass this data? The first attempt would be function openid() { ... $this->redirect(array('controller' => 'users', 'action' => 'openid_create', $openid)); } but the obvious problem is that this completely messes up the way CakePHP parses URL parameters. I'd need to do either of the following: 1) encode the URL in a CakePHP friendly manner for passing it, and decoding it after that, or 2) pass the URL as a POST parameter but I don't know how to do this. EDIT: In response to comments, I should be more clear. I am using the OpenId component, and I have a working OpenId implementation. What I need to do is to link OpenId with an existing user system. When a new user logs in via OpenId, I ask for more details, and then create a new user with this data. The problem is that I have to keep the OpenId URL throughout this process.
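
    One way to sketch option 1 (an illustration, not the canonical CakePHP answer): make the OpenID URL safe to embed as a routed parameter by base64-encoding it with a URL-safe alphabet, then decode it in the target action. The redirect call follows the question; the encoding helpers are plain PHP.

        <?php
        // Hedged sketch: URL-safe base64 so the OpenID identity survives routing.
        $token = rtrim(strtr(base64_encode($openid), '+/', '-_'), '=');
        $this->redirect(array('controller' => 'users',
                              'action' => 'openid_create', $token));

        // ... and in UsersController::openid_create($token):
        $openid = base64_decode(strtr($token, '-_', '+/'));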

    Read the article

  • Which software for intranet CMS - Django or Joomla?

    - by zalun
    In my company we are thinking of moving from a wiki-style intranet to a more bespoke CMS solution. The natural choice would be Joomla, but we have a specific architecture. A few hundred people will use the system, and it should be self-explanatory (easier than a wiki). We use a lot of tools: web applications and tools integrated within 3rd-party software. The element that glues them all together is our API. For example, for the intranet tools we do use Django, but without the ORM; it is pretty much limited to templates and URL routing, and every application has corresponding methods within our API. We do not use the Django admin interface, because it is heavily dependent on the ORM. Because of that, Joomla may be hard to integrate. Every employee should be able to edit most of the pages, and authentication and privileges have to be managed by our API. How hard is it to make Joomla use a different authentication process (extensions only, no hacks)? And if one knows Django better than Joomla, should Django be used?

    Read the article

  • Modeling objects with multiple table relationships in Zend Framework

    - by andybaird
    I'm toying with Zend Framework and trying to use the "QuickStart" guide against a website I'm making just to see how the process would work. Forgive me if this answer is obvious, hopefully someone experienced can shed some light on this. I have three database tables: CREATE TABLE `users` ( `id` int(11) NOT NULL auto_increment, `email` varchar(255) NOT NULL, `username` varchar(255) NOT NULL default '', `first` varchar(128) NOT NULL default '', `last` varchar(128) NOT NULL default '', `gender` enum('M','F') default NULL, `birthyear` year(4) default NULL, `postal` varchar(16) default NULL, `auth_method` enum('Default','OpenID','Facebook','Disabled') NOT NULL default 'Default', PRIMARY KEY (`id`), UNIQUE KEY `email` (`email`), UNIQUE KEY `username` (`username`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 CREATE TABLE `user_password` ( `user_id` int(11) NOT NULL, `password` varchar(16) NOT NULL default '', PRIMARY KEY (`user_id`), UNIQUE KEY `user_id` (`user_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 CREATE TABLE `user_metadata` ( `user_id` int(11) NOT NULL default '0', `signup_date` datetime default NULL, `signup_ip` varchar(15) default NULL, `last_login_date` datetime default NULL, `last_login_ip` varchar(15) default NULL, PRIMARY KEY (`user_id`), UNIQUE KEY `user_id` (`user_id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 I want to create a User model that uses all three tables in certain situations. E.g., the metadata table is accessed if/when the meta data is needed. The user_password table is accessed only if the 'Default' auth_method is set. I'll likely be adding a profile table later on that I would like to be able to access from the user model. What is the best way to do this with ZF and why?
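
    As one possible sketch (Zend Framework 1 table-gateway style, certainly not the only way): give each table its own Zend_Db_Table class, declare the relationships, and let the calling code pull the password row only when it is actually needed. Class names and the lazy-loading idea below are assumptions for illustration.

        <?php
        // Hedged sketch of table classes with declared relationships (ZF1).
        class Users extends Zend_Db_Table_Abstract
        {
            protected $_name = 'users';
            protected $_dependentTables = array('UserPassword');
        }

        class UserPassword extends Zend_Db_Table_Abstract
        {
            protected $_name = 'user_password';
            protected $_referenceMap = array(
                'User' => array(
                    'columns'       => 'user_id',
                    'refTableClass' => 'Users',
                    'refColumns'    => 'id',
                ),
            );
        }

        // Usage idea: load the password row only when auth_method is 'Default';
        // a user_metadata table class would follow the same pattern.
        $users = new Users();
        $user  = $users->find($userId)->current();
        if ($user->auth_method === 'Default') {
            $password = $user->findDependentRowset('UserPassword')->current();
        }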

    Read the article

  • How to push a new feature to a central Mercurial repo?

    - by Sly
    I'm assigned the development of a feature for a project. I'm going to work on that feature for several days over a period of a few weeks. I'll clone the central repo, then work locally for 3 weeks, committing my progress to my repo several times during that process. When I'm done, I'm going to pull/merge/commit before I push. What is the right way to push my feature as a single changeset to the central repo? I don't want to push 14 "work in progress" changesets and 1 "merged" changeset to the central repo. I want other collaborators on the project to see only one changeset with a significant commit message (such as "Implemented feature ABC"). I'm new to Mercurial and DVCS, so don't hesitate to provide guidance if you think I'm not approaching this the right way. <My own answer> So far I came up with a way of reducing 15 changesets to 2 changesets. Suppose changesets 10 to 24 are "work in progress" changesets. I can 'hg collapse -r 10:24 -m "Implemented feature ABC"' (14 changesets collapsed into 1). Then, I must 'hg pull' + 'hg merge' + 'hg commit -m "Merged with most recent changes"'. But now I'm stuck with 2 changesets. I can no longer 'hg collapse', because pull/merge/commit broke my changeset sequence. Of course 2 changesets is better than 15, but still, I'd rather have 1 changeset. </My own answer>
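
    A hedged sketch of one alternative using the bundled rebase extension instead of collapse (revision numbers follow the example above; the exact flags should be double-checked against the local Mercurial version): rebasing the work-in-progress changesets onto the freshly pulled head with --collapse folds them into a single new changeset, so no separate merge changeset is needed.

        # .hgrc:  [extensions]  rebase =
        hg pull
        hg rebase --source 10 --dest tip --collapse -m "Implemented feature ABC"
        hg push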

    Read the article

  • Suggestions for a cron-like scheduler in Python?

    - by jamesh
    I'm looking for a library in Python which will provide at- and cron-like functionality. I'd quite like to have a pure Python solution, rather than relying on tools installed on the box; that way I can run on machines with no cron. For those unfamiliar with cron: you can schedule tasks based upon an expression like: 0 2 * * 7 /usr/bin/run-backup # run the backups at 0200 on every Sunday 0 9-17/2 * * 1-5 /usr/bin/purge-temps # run the purge-temps command every 2 hours between 9am and 5pm, Monday to Friday. The cron time expression syntax is less important, but I would like to have something with this sort of flexibility. If there isn't something that does this for me out of the box, any suggestions for the building blocks to make something like this would be gratefully received. Edit: I'm not interested in launching processes, just "jobs" also written in Python, i.e. Python functions. By necessity I think this would be a different thread, but not a different process. To this end, I'm looking for the expressivity of the cron time expression, but in Python. Cron has been around for years, but I'm trying to be as portable as possible; I cannot rely on its presence.
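
    If nothing off the shelf fits, a rough sketch of the building blocks (pure Python 3, standard library only; the field handling is deliberately minimal and nowhere near full crontab syntax): each job carries the allowed minutes/hours/weekdays plus a callable, and a daemon thread wakes once a minute and runs whatever matches in its own thread.

        # Hedged sketch of an in-process, cron-flavoured scheduler.
        import datetime, threading, time

        class Job:
            def __init__(self, func, minutes=None, hours=None, weekdays=None):
                self.func = func
                self.minutes = minutes      # e.g. {0}; None means "every minute"
                self.hours = hours          # e.g. {2} for 02:00
                self.weekdays = weekdays    # 0=Monday .. 6=Sunday

            def matches(self, now):
                return ((self.minutes is None or now.minute in self.minutes) and
                        (self.hours is None or now.hour in self.hours) and
                        (self.weekdays is None or now.weekday() in self.weekdays))

        def run_scheduler(jobs):
            def loop():
                while True:
                    now = datetime.datetime.now()
                    for job in jobs:
                        if job.matches(now):
                            threading.Thread(target=job.func).start()
                    time.sleep(60 - now.second)   # wake at the top of the next minute
            threading.Thread(target=loop, daemon=True).start()

        # "0 2 * * 7 run-backup" becomes roughly:
        run_scheduler([Job(lambda: print("backup"), minutes={0}, hours={2}, weekdays={6})])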

    Read the article

  • What techniques can be used to detect so-called "black holes" (spider traps) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic: they appear to be different, but do not add any value, as they are specifically created to fool crawlers. An example: we tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links: evil.com/somePageOne, evil.com/somePageTwo, evil.com/somePageThree. The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs: evil.com/someSubPageOne, evil.com/someSubPageTwo. These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, it appears that this is only because the developer has made a "loop trap" or "black hole". The crawler will add the new sub page, and that sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages, which we actually are not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain indefinitely. My question is, what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legit site, like Wikipedia, can have hundreds of thousands of pages, so such a limit could produce false positives for these kinds of sites. Any feedback is appreciated. Thanks.
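
    For illustration, a rough sketch of two cheap heuristics (the thresholds are made-up, not tuned): cap the URL path depth, and cap the number of pages accepted per "URL pattern", where the pattern is the path with numbers and long hex tokens collapsed. Neither is a complete defence on its own, but together they bound how far a generated link maze can pull the crawler.

        # Hedged sketch: per-URL checks a crawler could run before queueing a link.
        import re
        from collections import defaultdict
        from urllib.parse import urlparse

        MAX_DEPTH = 8            # absurdly deep paths are suspicious
        MAX_PER_PATTERN = 500    # cap on pages that share one structural pattern
        pattern_counts = defaultdict(int)

        def looks_like_trap(url):
            parsed = urlparse(url)
            segments = [s for s in parsed.path.split('/') if s]
            if len(segments) > MAX_DEPTH:
                return True
            # collapse digits and long hex-ish tokens so generated URLs group together
            pattern = re.sub(r'\d+|[0-9a-f]{8,}', 'N', parsed.path.lower())
            pattern_counts[(parsed.netloc, pattern)] += 1
            return pattern_counts[(parsed.netloc, pattern)] > MAX_PER_PATTERN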

    Read the article

  • [WEB] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello. Situation: we are a little company of 3 people; each of us has a localhost webserver, and most projects (previous and current) are on one network-shared disk. We have a virtual server hosting some of our clients' sites and our own site. Our standard workflow is: coder PC -> programmer localhost -> dev domain (client.company.com) -> live version (client.com). It often happens that two or three guys are working on the same project at the same time, one on the dev version, two on localhost. When finished, we try to synchronize the files on the dev version and ideally not mess (thanks ILMV:]) up any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version to the live webserver. Question: we are looking for a way to simplify this workflow while updating websites, ideally some sort of diff uploader or probably a VCS (Git/SVN/...), but we are not completely sure where to begin or what approach would be ideal, therefore I ask you, fellow stackoverflowers, for your experience with website/application deployment and a recommended workflow. We will probably also need to use a Mac in the process, so if that won't be a problem, it would be even better. Thank you

    Read the article

  • How to parse large xml files on google app engine?

    - by Alon Carmel
    Hey, I have a fairly large XML file, 1 MB in size, that I host on S3. I need to parse that XML file into my App Engine datastore in its entirety. I have written a simple DOM parser that works fine locally, but online it hits the 30-second limit and stops. I tried to lighten the parsing step by first downloading the XML file into a blob and then parsing the XML from the blob, but blobs are limited to 1 MB, so that fails. I have multiple inserts to the datastore, which cause it to hit the 30-second limit. I saw somewhere a recommendation to use the Mapper class and save the point where the process stopped, but as I am a Python n00b I can't figure out how to implement that with a DOM parser or a SAX one (please provide an example of how to use it?). Right now I'm pretty much doing a bad thing: I parse the XML using PHP outside App Engine and push the data via HTTP POST to App Engine through a proprietary API, which works fine but is stupid and makes me maintain two code bases. Can you please help me out?
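
    A hedged sketch of the SAX half of the answer (element names, the entity kind and the batch size are placeholders; it assumes the classic google.appengine.ext.db API): streaming the document instead of building a DOM keeps memory flat, and batching the puts keeps each datastore call short. Splitting the work across task-queue tasks, which is what the Mapper suggestion implies, is a separate step not shown here.

        # Hedged sketch: stream the XML and write entities in small batches.
        import xml.sax
        from google.appengine.ext import db   # assumed App Engine Python runtime

        class Record(db.Model):
            name = db.StringProperty()

        class FeedHandler(xml.sax.ContentHandler):
            def __init__(self):
                xml.sax.ContentHandler.__init__(self)
                self.batch = []

            def startElement(self, tag, attrs):
                if tag == 'Item':                       # placeholder element name
                    self.batch.append(Record(name=attrs.get('name', '')))
                if len(self.batch) >= 100:              # flush in small batches
                    db.put(self.batch)
                    self.batch = []

            def endDocument(self):
                if self.batch:
                    db.put(self.batch)

        xml.sax.parse(open('feed.xml'), FeedHandler())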

    Read the article

  • Use of Syntactic Sugar / Built in Functionality

    - by Kyle Rozendo
    I was busy looking deeper into things like multi-threading and deadlocking etc. The book is aimed at both pseudo-code and C code and I was busy looking at implementations for things such as Mutex locks and Monitors. This brought to mind the following; in C# and in fact .NET we have a lot of syntactic sugar for doing things. For instance (.NET 3.5): lock(obj) { body } Is identical to: var temp = obj; Monitor.Enter(temp); try { body } finally { Monitor.Exit(temp); } There are other examples of course, such as the using() {} construct etc. My question is when is it more applicable to "go it alone" and literally code things oneself than to use the "syntactic sugar" in the language? Should one ever use their own ways rather than those of people who are more experienced in the language you're coding in? I recall having to not use a Process object in a using block to help with some multi-threaded issues and infinite looping before. I still feel dirty for not having the using construct in there. Thanks, Kyle
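
    As an aside the question invites, a small hedged sketch of one case where writing the Monitor code by hand buys something the lock(){} sugar cannot express, namely a timeout while acquiring (the Monitor.TryEnter overload taking a TimeSpan has existed since .NET 2.0):

        // Hedged sketch: hand-written Monitor code with a timeout, which lock(){} can't do.
        object gate = new object();
        if (Monitor.TryEnter(gate, TimeSpan.FromSeconds(2)))
        {
            try
            {
                // body
            }
            finally
            {
                Monitor.Exit(gate);
            }
        }
        else
        {
            // give up (or log) instead of blocking forever
        }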

    Read the article

  • How to rename goals in Maven?

    - by mjs
    In the Maven document Introduction to the Build Lifecycle, a goal of display:time is described that outputs the current time. The plugin is as follows: ... <plugin> <groupId>com.mycompany.example</groupId> <artifactId>maven-touch-plugin</artifactId> <version>1.0</version> <executions> <execution> <phase>process-test-resources</phase> <goals> <goal>timestamp</goal> </goals> </execution> </executions> </plugin> ... I have several questions relating to this plugin: How can I change the name of the goal to, for example, foo:bar? (Why does neither display nor time appear anywhere in the XML fragment? How can you tell, from looking at the fragment, what goals it defines?) How can I manually run this goal? (For similar constructs, the equivalent of mvn display:time sometimes works, but this doesn't work consistently.) How can I see if this goal exists? (i.e. list available goals; this question suggests this is impossible.)
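
    As an illustrative note on the second and third questions (commands shown as a sketch; the coordinates come from the fragment above): a goal can always be run by its fully qualified name, and the short display:time style prefix comes from the plugin's own goalPrefix metadata rather than from the POM, which is why neither display nor time appears in the fragment.

        # run the goal from the fragment directly, without relying on a prefix
        mvn com.mycompany.example:maven-touch-plugin:1.0:timestamp

        # list the goals a plugin actually defines (assumes the maven-help-plugin is available)
        mvn help:describe -Dplugin=com.mycompany.example:maven-touch-plugin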

    Read the article

  • How to connect MATLAB with Java

    - by user304005
    Through the code given below I was able to connect to MATLAB, but I was not able to execute the script file containing the MATLAB code. Please help me modify the code so that it executes the MATLAB script. Here luck2 is a .m file.

        import java.io.InputStreamReader;
        import java.io.*;

        public class matlab {
            private static File myMATLABScript;

            public static String runScript(File luck2) {
                String output = "";
                String error = "";
                try {
                    String commandToRun = "C:\\Program Files\\MATLAB\\R2009a\\bin\\matlab -nodisplay <" + "Z:\\sem\\java\\luck2";
                    // String commandToRun = "matlab -nosplash -r myMATLABScript -nodisplay -nodesktop < " + opentxt;
                    System.out.println(commandToRun);
                    Process p = Runtime.getRuntime().exec(commandToRun);
                    String s;
                    BufferedReader stdInput = new BufferedReader(new InputStreamReader(p.getInputStream()));
                    BufferedReader stdError = new BufferedReader(new InputStreamReader(p.getErrorStream()));
                    System.out.println("\nHere is the standard output of the command:\n");
                    while ((s = stdInput.readLine()) != null) {
                        System.out.println("haiiiiiiiiiiii");
                        output = s + "\n";
                        System.out.println(s);
                    }
                    while ((s = stdError.readLine()) != null) {
                        error = s + "\n";
                        System.out.println(s);
                    }
                } catch (Exception e) {
                    System.out.println("exception happened here what I know:");
                    e.printStackTrace();
                    System.exit(-1);
                }
                return output + error;
            }

            public static void main(String[] args) throws IOException {
                matlab m = new matlab();
                matlab.runScript(myMATLABScript);
                // matlab.runScript();
            }
        }

    Read the article

  • How to start AJAX in Zend?

    - by Awan
    I am working on some projects as a developer (PHP, MySQL) in which AJAX and jQuery are already implemented, but now I want to learn how to implement the AJAX and jQuery parts myself. Can anyone tell me the exact steps, with an explanation? I have created a project in Zend. There is only one controller (IndexController) and two actions (a and b) in my project now. I want to use AJAX in the project, but I don't know how to start; I read some tutorials but was unable to completely understand them. My index.phtml looks like this:

        <a href='index/a'>Load info A</a> <br/>
        <a href='index/b'>Load info B</a> <br />
        <div id="one">load first here</div>
        <div id="two">load second here</div>

    Here index is the controller in the links, and a and b are the actions. I also have four view files: a1.phtml ("I am a1"), a2.phtml ("I am a2"), b1.phtml ("I am b1") and b2.phtml ("I am b2"). I think you've got my point: when the user clicks the first link (Load info A), a1.phtml should be loaded into div one and a2.phtml into div two; when the user clicks the second link (Load info B), b1.phtml should be loaded into div one and b2.phtml into div two. Can someone also tell me the purpose of JSON in this process and how to use it? Thanks
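
    A hedged sketch of the jQuery side (the element ids follow the question; it assumes each fragment is exposed by its own action returning just the rendered fragment with the layout disabled, which is an assumption, not something the question states). JSON becomes useful when one request should fill both divs: the action returns a single JSON object carrying both fragments instead of plain HTML.

        // Plain-HTML variant: each click pulls one rendered fragment per div.
        $("a[href='index/a']").click(function (e) {
            e.preventDefault();
            $('#one').load('index/a1');   // assumes an a1 action exists for a1.phtml
            $('#two').load('index/a2');
        });

        // JSON variant: one action returns both fragments in a single response,
        // e.g. {"one": "I am a1", "two": "I am a2"}.
        $.getJSON('index/a', function (data) {
            $('#one').html(data.one);
            $('#two').html(data.two);
        });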

    Read the article

  • Browser relative positioning with jQuery and CutyCapt

    - by Acoustic
    I've been using CutyCapt to take screen shots of several web pages with great success. My challenge now is to paint a few dots on those screen shots that represent where a user clicked. CutyCapt goes through a process of resizing the web page to the scroll width before taking a screen shot. That's extremely useful because you only get content and not much (if any) of the page's background. My challenge is trying to map a user's mouse X coordinates to the screen shot. Obviously users have different screen resolutions and have their browser window open to different sizes. The image below shows 3 examples with the same logo. Assume, for example, that the logo is 10 pixels to the left edge of the content area (in red). In each of these cases, and for any resolution, I need a JavaScript routine that will calculate that the logo's X coordinate is 10. Again, the challenge (I think) is differing resolutions. In the center-aligned examples, the logo's position, as measured from the left edge of the browser (in black), differs with changing browser size. The left-aligned example should be simple as the logo never moves as the screen resizes. Can anyone think of a way to calculate the scrollable width of a page? In other words, I'm looking for a JavaScript solution to calculate the minimum width of the browser window before a horizontal scroll bar shows up. Thanks for your help!
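
    A hedged JavaScript sketch of one way around the problem: rather than computing the page's scrollable width, measure the content block that the screenshot is effectively cropped to and express the click relative to its left edge. The wrapper element (and the 'content' id in the usage note) is an assumption; any element that hugs the page content would do.

        // Translate a click's clientX into an X measured from the left edge of the
        // page's main content block, independent of browser width or alignment.
        function clickXInContent(event, wrapper) {
            var left = wrapper.getBoundingClientRect().left; // content's left edge in the viewport
            return event.clientX - left;
        }

        // Usage sketch:
        // document.addEventListener('click', function (e) {
        //     var x = clickXInContent(e, document.getElementById('content'));
        //     // x should line up with the same pixel column in the CutyCapt shot
        // });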

    Read the article

  • How can I display multiple django modelformset forms together?

    - by JT
    I have a problem with needing to provide multiple model backed forms on the same page. I understand how to do this with single forms, i.e. just create both the forms call them something different then use the appropriate names in the template. Now how exactly do you expand that solution to work with modelformsets? The wrinkle, of course, is that each 'form' must be rendered together in the appropriate fieldset. For example I want my template to produce something like this: <fieldset> <label for="id_base-0-desc">Home Base Description:</label> <input id="id_base-0-desc" type="text" name="base-0-desc" maxlength="100" /> <label for="id_likes-0-icecream">Want ice cream?</label> <input type="checkbox" name="likes-0-icecream" id="id_likes-0-icecream" /> </fieldset> <fieldset> <label for="id_base-1-desc">Home Base Description:</label> <input id="id_base-1-desc" type="text" name="base-1-desc" maxlength="100" /> <label for="id_likes-1-icecream">Want ice cream?</label> <input type="checkbox" name="likes-1-icecream" id="id_likes-1-icecream" /> </fieldset> I am using a loop like this to process the results for base_form, likes_form in map(None, base_forms, likes_forms): which works as I'd expect (I'm using map because the # of forms can be different). The problem is that I can't figure out a way to do the same thing with the templating engine. The system does work if I layout all the base models together then all the likes models after wards, but it doesn't meet the layout requirements.
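
    For illustration, a sketch of the usual pattern (the formset classes and the request handling are placeholders): give each modelformset its own prefix, zip the two .forms lists in the view the same way the processing loop above does, and let the template iterate over the pairs so each fieldset renders one form from each formset.

        # Hedged sketch: pair two modelformsets for rendering.
        from itertools import zip_longest   # mirrors the map(None, ...) trick above

        base_formset  = BaseFormSet(request.POST or None, prefix='base')
        likes_formset = LikesFormSet(request.POST or None, prefix='likes')
        pairs = list(zip_longest(base_formset.forms, likes_formset.forms))

        context = {
            'base_formset': base_formset,    # still needed for the management forms
            'likes_formset': likes_formset,
            'pairs': pairs,
        }

        # template side, roughly:
        #   {{ base_formset.management_form }} {{ likes_formset.management_form }}
        #   {% for base_form, likes_form in pairs %}
        #     <fieldset>{{ base_form.as_p }}{{ likes_form.as_p }}</fieldset>
        #   {% endfor %}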

    Read the article

  • ClassLoader exceptions being memoized

    - by Jim
    Hello, I am writing a classloader for a long-running server instance. If a user has not yet uploaded a class definition, I throw a ClassNotFoundException, which seems reasonable. The problem is this: there are three classes (C1, C2, and C3). C1 depends on C2, and C2 depends on C3. C1 and C2 are resolvable; C3 isn't (yet). C1 is loaded. C1 subsequently performs an action that requires C2, so C2 is loaded. C2 subsequently performs an action that requires C3, so the classloader attempts to load C3, but can't resolve it, and an exception is thrown. Now C3 is added to the classpath, and the process is restarted (starting from the originally loaded C1). The issue is, C2 seems to remember that C3 couldn't be loaded, and doesn't bother asking the classloader to find the class... it just re-throws the memoized exception. Clearly I can't reload C1 or C2, because other classes may have linked to them (as C1 has already linked to C2). I tried throwing different types of errors, hoping the class might not memoize them. Unfortunately, no such luck. Is there a way to prevent the loaded class from binding to the exception? That is, I want the classloader to be allowed to keep trying if it didn't succeed the first time. Thanks!
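
    One workaround sometimes used for this situation (offered as a sketch, not as the asker's design; the class names are the C1/C2/C3 placeholders from the question): if C2 reaches C3 only through reflection, no failed link is ever recorded inside C2, so a later attempt, once C3 has become resolvable, goes back through the classloader.

        // Hedged sketch: defer the C2 -> C3 dependency to runtime via reflection.
        public final class C2 {
            public Object makeC3(ClassLoader loader) throws Exception {
                // No static reference to C3 anywhere in C2, so a failure here is an
                // ordinary exception rather than a memoized resolution error.
                Class<?> c3 = Class.forName("com.example.C3", true, loader);
                return c3.getDeclaredConstructor().newInstance();
            }
        }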

    Read the article

  • Rapid calls to fread crash the application

    - by Slynk
    I'm writing a function to load a wave file and, in the process, split the data into 2 separate buffers if it's stereo. The program gets to i = 18 and crashes during the left channel fread pass. (You can ignore the couts, they are just there for debugging.) Maybe I should load the file in one pass and use memmove to fill the buffers?

        if (params.channels == 2) {
            params.leftChannelData = new unsigned char[params.dataSize/2];
            params.rightChannelData = new unsigned char[params.dataSize/2];

            bool isLeft = true;
            int offset = 0;
            const int stride = sizeof(BYTE) * (params.bitsPerSample/8);

            for (int i = 0; i < params.dataSize; i += stride)
            {
                std::cout << "i = " << i << " ";
                if (isLeft) {
                    std::cout << "Before Left Channel, ";
                    fread(params.leftChannelData+offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Left Channel, ";
                }
                else {
                    std::cout << "Before Right Channel, ";
                    fread(params.rightChannelData+offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Right Channel, ";
                    offset += stride;
                    std::cout << "After offset incr.\n";
                }
                isLeft != isLeft;
            }
        }
        else {
            params.leftChannelData = new unsigned char[params.dataSize];
            fread(params.leftChannelData, sizeof(BYTE), params.dataSize, file);
        }
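
    Following the question's own "load the file in one pass" idea, a hedged sketch (it assumes <cstdio> and <cstring> are included, that the single fread succeeds, and that the params field names are as in the question): read the interleaved data once, then copy alternating sample frames into the two channel buffers.

        // Sketch: one read of the whole data chunk, then de-interleave the frames.
        const int stride = params.bitsPerSample / 8;              // bytes per sample
        unsigned char* interleaved = new unsigned char[params.dataSize];
        fread(interleaved, 1, params.dataSize, file);

        params.leftChannelData  = new unsigned char[params.dataSize / 2];
        params.rightChannelData = new unsigned char[params.dataSize / 2];

        int offset = 0;
        for (int i = 0; i + 2 * stride <= (int)params.dataSize; i += 2 * stride)
        {
            std::memmove(params.leftChannelData  + offset, interleaved + i,          stride);
            std::memmove(params.rightChannelData + offset, interleaved + i + stride, stride);
            offset += stride;
        }
        delete[] interleaved;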

    Read the article

  • CSS Background image in Redmine template arbitrarily not loading

    - by Pekka
    I'm in the process of building a template for Redmine (a project management system based on Ruby on Rails). Ruby is running on a virtual server from a Bitnami.org installation package; the OS is Windows. The template essentially consists of a styles.css file. In that file, I have the following rule:

        #header {
            padding: 0px;
            padding-top: 48px;
            background-color: #62DFFF;
            background-image: url(../images/bkg.jpg);
            background-position: center bottom;
            background-repeat: repeat-x;
            height: 150px;
        }

    It's a header element with a background image. The problem: this background image arbitrarily appears and disappears when reloading. Say you reload ten times in twenty seconds; the image will appear in two instances and be missing in the 18 others. I would have put this down to server problems, but the weird thing is that when it's missing, the request for the image doesn't appear in Firebug's net tab at all. Even if it were cached, the request should be there. Raw screenshots of the identical page on two reloads show the difference. I am 100% sure the CSS file does not change in between; I have examined both instances with Firebug and the CSS is identical. It happens in both Firefox and Chrome, so it must be something basic I'm overlooking. What could be causing a browser not to load a resource at all? I have zero idea about Ruby or Rails (getting Redmine running and customized is all I have ever had to do with this platform), so I don't really know where to look. Apache's, Mongrel's and Redmine's error logs look fine, though.

    Read the article

  • MSI Installer starts auto-repair when service starts

    - by Josh Clark
    I have a WiX based MSI that installs a service and some shortcuts (and lots of other files that don't). The shortcut is created as described in the WiX docs with a registry key under HKCU as the key file. This is an all users install, but to get past ICE38, this registry key has to be under the current user. When the service starts (it runs under the SYSTEM account) it notices that that registry key isn't valid (at least of that user) and runs the install again to "repair". In the Event Log I get MsiInstaller Events 1001 and 1004 showing that "The resource 'HKEY_CURRENT_USER\SOFTWARE\MyInstaller\Foo' does not exist." This isn't surprising since the SYSTEM user wouldn't have this key. I turned on system wide MSI logging and the auto-repair created its log file in the C:\Windows\Temp folder rather than a specific user's TEMP folder which seems to imply the current user was SYSTEM (plus the log file shows the "Calling process" to be my service). Is there something I can do to disable the auto-repair functionality? Am I doing something wrong or breaking some MSI rule? Any hints on where to look next?

    Read the article

  • Processing XML form input in ASP

    - by Omar Kooheji
    I'm maintaining a legacy application which consists of some ASP.NET pages with C# code-behinds and some classic ASP pages. I need to change the way the application accepts its input: instead of reading a set of parameters from several form fields, it should read one form field that contains XML and parse the parameters out of it. I've written a C# class that takes the NameValueCollection from the HttpRequest's Form element, like so:

        NameValueCollection form = Request.Form;
        Dictionary<string, string> fieldDictionary = RequestDataExtractor.BuildFieldDictionary(form);

    The code in the class looks for a particular parameter; if it's there, it processes the XML and outputs a Dictionary, and if it's not there it just cycles through the form parameters and puts them all into the dictionary (allowing the old method to still work). How would I do this in classic ASP? Can I use my same class, or a modified version of it, or do I have to write some new code? If I have to write ASP code, what's the best way to process the XML in ASP? Sorry if this seems like a stupid question, but I know next to nothing about ASP and VB.
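
    For illustration only, a rough classic-ASP (VBScript) sketch of the usual approach, loading the posted field into an MSXML DOM and reading values out with XPath; the field name and the XML shape are placeholders, not the application's real format:

        <%
        ' Hedged sketch: parse an XML string posted in a single form field.
        Dim doc, node, paramValue
        Set doc = Server.CreateObject("MSXML2.DOMDocument.6.0")
        doc.async = False

        If doc.loadXML(Request.Form("xmlData")) Then          ' "xmlData" is a placeholder name
            Set node = doc.selectSingleNode("/request/param[@name='customerId']")
            If Not node Is Nothing Then paramValue = node.Text
        Else
            Response.Write "XML parse error: " & doc.parseError.reason
        End If
        %>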

    Read the article

  • Ant fails without message at javac

    - by digitala
    I've written an Ant build.xml file which obtains a number of source files via WSDL and compiles them. These have been working on an old, now destroyed (and therefore unavailable for comparison), system but the build process isn't completing on this newer, faster system. The relevant section of the build file looks like this: <target name="compile" depends="init"> <java classname="org.apache.axis.wsdl.WSDL2Java"> <arg line="--all --server-side --skeletonDeploy --factory --wrapArrays --output src ${srcurl}" /> </java> <javac srcdir="${src}" destdir="${build}" verbose="yes" /> </target> The files are downloaded via the WSDL service successfully, however after that point Ant simply stops & returns to the commandline. Versions of the relevant apps: # java -version java version "1.6.0_14" Java(TM) SE Runtime Environment (build 1.6.0_14-b08) Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode) # javac -version javac 1.6.0_14 # ant -version Apache Ant version 1.6.5 compiled on January 6 2007 I'm assuming that there's a problem with javac that Ant isn't passing back. Is there any way I can get some debugging information from javac? I've tried adding a <record /> tag to the target but that doesn't give any more information than running ant -v does. Any other suggestions would be great, also!
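
    A couple of hedged things to try (the attributes are standard <javac> options; the memory figure is only an illustrative guess): forking javac into its own process often surfaces the error an in-process compiler failure hides, the classic culprit being heap exhaustion on a large generated source tree, and failonerror ensures Ant does not simply fall through.

        <javac srcdir="${src}" destdir="${build}" verbose="yes"
               fork="yes" memoryMaximumSize="512m" failonerror="true" />

    If forking alone doesn't reveal anything, running the build with ant -debug gives more detail than -v and can be redirected to a file for inspection.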

    Read the article
