Search Results

Search found 8692 results on 348 pages for 'per magnusson'.

Page 65/348

  • Maximum Possible File Name Length in Windows Kernel

    - by Lambert
    I was wondering: what is the longest possible file name length allowed by the Windows kernel? For example, I know the kernel uses UNICODE_STRING structures to hold all object paths, and since the byte length of a wide-character string is stored in a USHORT, that allows a maximum path length of 2^15 - 1 characters. Is there a similar hard restriction on a file name (rather than a path)?

    (I don't care whether NTFS or FAT32 imposes a particular restriction; I'm looking for the longest name theoretically allowed by the kernel, assuming no additional file system or shell restrictions.)

    (Edit: For those wondering why this even matters: normally, traversing a directory is achieved by FindFirstFile/FindNextFile calls, one call per file. NtQueryDirectoryFile, the underlying system call, returns multiple file names per call, so it's possible to take advantage of this maximum-length restriction on the path to build an extremely fast directory traverser that uses solely the stack as a buffer. Now I'm trying to extend that concept, and I need to know the maximum size of a file name.)
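
    For reference, here is a minimal sketch of the arithmetic behind the 2^15 - 1 figure, based on the documented UNICODE_STRING layout (the struct below is an illustrative mirror of the real typedef, not a redefinition of it, and the derived constants are not a kernel-defined file-name limit):

        #include <cstddef>
        #include <cstdint>

        // Mirror of the kernel's counted-string type: Length holds a BYTE count in a USHORT.
        struct UnicodeStringLayout {
            std::uint16_t Length;        // bytes currently used in Buffer
            std::uint16_t MaximumLength; // bytes allocated for Buffer
            wchar_t*      Buffer;        // not necessarily NUL-terminated
        };

        // A 16-bit byte count tops out at 65535 bytes; UTF-16 code units are 2 bytes each.
        constexpr std::size_t kMaxBytes     = 0xFFFF;
        constexpr std::size_t kMaxWideChars = kMaxBytes / 2;   // 32767 == 2^15 - 1

        static_assert(kMaxWideChars == 32767,
                      "a USHORT byte length caps an object path at 2^15 - 1 UTF-16 units");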

  • Autofac: Reference from a SingleInstance'd type to a HttpRequestScoped

    - by Michael Wagner
    I've got an application where a shared object needs a reference to a per-request object:

        Shared:  Engine
        Per req: IExtensions() | Request

    If I try to inject the IExtensions directly into the constructor of Engine, even as Lazy(Of IExtension), I get a "No scope matching [Request] is visible from the scope in which the instance was requested." exception when it tries to instantiate each IExtension. How can I create an HttpRequestScoped instance and then inject it into a shared instance? Would it be considered good practice to set it in the Request's factory (and therefore inject Engine into RequestFactory)?

  • Odd optimization problem under MSVC

    - by Goz
    I've seen this blog: http://igoro.com/archive/gallery-of-processor-cache-effects/ The "weirdness" in part 7 is what caught my interest. My first thought was "That's just C# being weird". It's not: I wrote the following C++ code.

        volatile int* p = (volatile int*)_aligned_malloc( sizeof( int ) * 8, 64 );
        memset( (void*)p, 0, sizeof( int ) * 8 );

        double dStart = t.GetTime();
        for (int i = 0; i < 200000000; i++)
        {
            //p[0]++;p[1]++;p[2]++;p[3]++;   // Option 1
            //p[0]++;p[2]++;p[4]++;p[6]++;   // Option 2
            p[0]++;p[2]++;                   // Option 3
        }
        double dTime = t.GetTime() - dStart;

    The timings I get on my 2.4 GHz Core 2 Quad are as follows:

        Option 1 = ~8 cycles per loop.
        Option 2 = ~4 cycles per loop.
        Option 3 = ~6 cycles per loop.

    Now this is confusing. My reasoning behind the difference comes down to the cache write latency (3 cycles) on my chip and an assumption that the cache has a 128-bit write port (this is pure guesswork on my part). On that basis, in Option 1 it will increment p[0] (1 cycle), then increment p[2] (1 cycle), then it has to wait 1 cycle (for the cache), then p[1] (1 cycle), then wait 1 cycle (for the cache), then p[3] (1 cycle). Finally 2 cycles for increment and jump (though it's usually implemented as decrement and jump). This gives a total of 8 cycles.

    In Option 2 it can increment p[0] and p[4] in one cycle, then increment p[2] and p[6] in another cycle. Then 2 cycles for subtract and jump. No waits needed on the cache. Total 4 cycles.

    In Option 3 it can increment p[0], then has to wait 2 cycles, then increment p[2], then subtract and jump. The problem is, if you set case 3 to increment p[0] and p[4] it STILL takes 6 cycles (which kinda blows my 128-bit read/write port out of the water).

    So ... can anyone tell me what the hell is going on here? Why DOES case 3 take longer? Also I'd love to know what I've got wrong in my thinking above, as I obviously have something wrong! Any ideas would be much appreciated! :) It'd also be interesting to see how GCC or any other compiler copes with it!

    Edit: Jerry Coffin's idea gave me some thoughts. I've done some more tests (on a different machine, so forgive the change in timings) with and without nops and with different counts of nops:

        case 2 - 0.46    00401ABD jne (401AB0h)
        0 nops - 0.68    00401AB7 jne (401AB0h)
        1 nop  - 0.61    00401AB8 jne (401AB0h)
        2 nops - 0.636   00401AB9 jne (401AB0h)
        3 nops - 0.632   00401ABA jne (401AB0h)
        4 nops - 0.66    00401ABB jne (401AB0h)
        5 nops - 0.52    00401ABC jne (401AB0h)
        6 nops - 0.46    00401ABD jne (401AB0h)
        7 nops - 0.46    00401ABE jne (401AB0h)
        8 nops - 0.46    00401ABF jne (401AB0h)
        9 nops - 0.55    00401AC0 jne (401AB0h)

    I've included the jump statements so you can see that the source and destination are in one cache line. You can also see that we start to get a difference when we are 13 bytes or more apart. Until we hit 16 ... then it all goes wrong. So Jerry isn't right (though his suggestion DOES help a bit); however, something IS going on. I'm more and more intrigued to try and figure out what it is now. It does appear to be more some sort of memory alignment oddity rather than some sort of instruction throughput oddity. Anyone want to explain this for an inquisitive mind? :D

    Edit 3: Interjay has a point on the unrolling that blows the previous edit out of the water. With an unrolled loop the performance does not improve. You need to add a nop in to make the gap between jump source and destination the same as for my good nop count above. Performance still sucks. It's interesting that I need 6 nops to improve performance, though. I wonder how many nops the processor can issue per cycle? If it's 3, then that accounts for the cache write latency ... But, if that's it, why is the latency occurring? Curiouser and curiouser ...
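
    For anyone who wants to reproduce the experiment, here is a self-contained sketch of the same loop with the asker's unspecified timer t replaced by std::chrono (the _aligned_malloc call assumes MSVC; everything else mirrors the snippet above):

        #include <chrono>
        #include <cstdio>
        #include <cstring>
        #include <malloc.h>   // _aligned_malloc / _aligned_free (MSVC-specific)

        int main()
        {
            volatile int* p = (volatile int*)_aligned_malloc(sizeof(int) * 8, 64);
            std::memset((void*)p, 0, sizeof(int) * 8);

            const int iterations = 200000000;
            auto start = std::chrono::steady_clock::now();
            for (int i = 0; i < iterations; i++)
            {
                //p[0]++; p[1]++; p[2]++; p[3]++;   // Option 1
                //p[0]++; p[2]++; p[4]++; p[6]++;   // Option 2
                p[0]++; p[2]++;                     // Option 3
            }
            auto stop = std::chrono::steady_clock::now();

            double seconds = std::chrono::duration<double>(stop - start).count();
            std::printf("%.3f s total, %.2f ns per iteration\n",
                        seconds, seconds * 1e9 / iterations);

            _aligned_free((void*)p);
            return 0;
        }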

  • CouchDB Versioning / Auditing

    - by Cory
    I'm attempting to use CouchDB for a system that requires full auditing of all data operations. Because of its built in revision-tracking, couch seemed like an ideal choice. But then I read in the O'Reilly textbook that "CouchDB does not guarantee that older versions are kept around." I can't seem to find much more documentation on this point, or how couch deals with its revision-tracking internally. Is there any way to configure couch either on a per-database, or per-document level to keep all versions around forever? If so, how?

  • Change Gmail message routing on individual mailboxes

    - by citadelgrad
    We are using dual delivery for one of our Google Apps domains and need to be able to disable mail delivery to the Gmail account. You can manually update the settings on a per-user basis through the Admin interface by unchecking the box next to "Google Apps Email" in the Email routing section. From the Google Apps API documentation for the Python library, it does not appear that I can programmatically disable the email routing for "Google Apps Email" on a per-user basis. Does anyone know if it's possible? The only routing-related method I can find is at the domain level and not the user level: gdata.apps.adminsettings.service. Thank you!

  • Objective-c class initializer executes multiple times

    - by apisip
    Unless I misunderstood it, this LINK from Apple's documentation clearly states that the class initializer, "+ (void)initialize", only executes once per class. Here is the excerpt:

        Special Considerations
        initialize is invoked only once per class. If you want to perform independent initialization for the class and for categories of the class, you should implement load methods.

    However, I'm getting weird behavior in my project and the initializer is being executed twice, so I have to check whether _classContext is nil. I only have one class that has this method. What are the possible reasons why this is happening? I'm using Xcode 4.5.2 and OS X 10.8.2, and I have multiple iOS simulators (iPhone 5.1 and 6.0).

        + (void)initialize
        {
            num++;
            NSLog([NSString stringWithFormat:@"Times: %i", num]);
            if (_classContext == nil)
                _classContext = [[myClass alloc] init];
        }

  • environment configuration for tests running in NUnit

    - by Frank Schwieterman
    I have some integration tests that hit a web server and verify certain functionality. Depending on the build environment, the server will be at a different address (http://localhost:8080/, http://test-vm/, etc.). I would like to run these tests from a TFS build. I'm wondering what the appropriate way to configure these tests is. Would I just add a setting to the config file? I'm doing that currently. Incidentally, we do have a separate branch per test environment, so I could have a different config file checked in for each environment. I wonder if there is a better way, though? I'd like the build project to be able to tell the tests what server to test against. This seems better because then I don't have to maintain config information on a per-branch basis. I believe I'd be using NUnit for Team Build (http://nunit4teambuild.codeplex.com/) to get NUnit/TFS to play together.

  • Large file download for a Rails project

    - by Horace Ho
    One client project will go online two months from now. One of the changed requirements is to support large file downloads (10 to 15 MB per RAW camera file, with an expected 1,000 to 5,000 downloads per day) worldwide for their customers. The process will be:

        - an upload screen via Paperclip to the Rails local public folder
        - an hourly task to upload the files to web storage (S3?)
        - update the download URL from the Paperclip URL to the web URL

    Questions: is there a gem/plug-in for this purpose? If not, is there a gem/plug-in for S3 to recommend? Questions about the storage provider: is S3 recommended, or is there another service to recommend? The baseline is: the client's web server does not and will not have the bandwidth to handle the downloads. Thanks

  • Is re-using a Command and Connection object in ADO.NET a legitimate way of reducing new object creation?

    - by Neil Trodden
    The way our application is currently written involves creating a new connection and command object in every method that accesses our SQLite DB. Considering we need it to run on a WM5 device, that is leading to hideous performance. Our plan is to use just one connection object per thread, but it has also occurred to us to use one global command object per thread too. The benefit of this is that it reduces the overhead on the garbage collector created by instantiating objects all over the place. I can't find any advice against doing this, but wondered if anyone can answer definitively whether this is a good or bad thing to do, and why?

  • iPhone - how to be notified of call completion

    - by ecodan
    Hi, I'm developing an application that needs to take action on completed phone calls, preferably right after the call ends but at a minimum once per day. I've read up on the new CoreTelephony framework, and it seems I can get call events if my app is active, but I don't see how to launch/wake my app when a call ends if my app is not the foreground app. I also don't see how any of the new pseudo-background "modes" would allow my app to listen for these events in the background. Do any of you know how this might be done? If post-call processing isn't possible, then I'd like to figure out a way to automatically wake my app up once per day, pull all of the call events since the last wakeup, and process them. I know how I might do this with push or local notifications, but my understanding is that those require user action to continue; in this case, I just want the processing to happen automatically. Is there a mechanism that would enable this? Thanks, Dan

  • Download data in background with iOS4

    - by Sagar
    As per the latest update of Kindle v2.5, it supports "continue downloading books while the app is in the background on iOS 4 devices". How is it possible to download content in the background? As per the iOS multitasking documentation, only audio, VoIP and location updates are possible in the background. I've also made sure that NSURLConnection doesn't download new data while the app goes into the background. So how is this possible with the Kindle app? Edit: I haven't checked the Kindle app on an iOS 4 multitasking-enabled device, so if anyone can let me (and the community) know what exactly the Kindle app does to download, that would be very helpful.

  • [C++] Is it possible to use threads to speed up file reading?

    - by Mister Mystère
    Hi there, I want to read a file as fast as possible (40k lines) [Edit: the rest is obsolete]. Edit: Andres Jaan Tack suggested a solution based on one thread per file, and I want to be sure I got this right (and that this is therefore the fastest way):

        - One thread per input file reads it whole and stores its content in an associated container (as many containers as there are input files).
        - One thread calculates the linear combination of every cell read by the input threads, and stores the results in the output container (associated with the output file).
        - One thread writes the content of the output container by blocks (every 4 kB of data, so about 10 lines).

    Should I deduce that I must not use memory-mapped files (because the program is on standby waiting for the data)? Thanks in advance. Sincerely, Mister mystère.
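
    A minimal sketch of the reader / combiner / writer split described above, using standard C++ threads; the file names and weights are placeholders, and for brevity the combining and writing stages run after the reader threads are joined rather than concurrently behind a queue:

        #include <fstream>
        #include <functional>
        #include <iostream>
        #include <string>
        #include <thread>
        #include <vector>

        // One reader thread per input file: read the whole file of numbers into its own container.
        static void readFile(const std::string& path, std::vector<double>& out)
        {
            std::ifstream in(path);
            double value;
            while (in >> value)
                out.push_back(value);
        }

        int main()
        {
            const std::vector<std::string> inputs  = { "a.txt", "b.txt" };  // placeholder names
            const std::vector<double>      weights = { 2.0, -1.0 };         // placeholder weights

            // Stage 1: one thread per input file, one container per file.
            std::vector<std::vector<double>> columns(inputs.size());
            std::vector<std::thread> readers;
            for (std::size_t i = 0; i < inputs.size(); ++i)
                readers.emplace_back(readFile, std::cref(inputs[i]), std::ref(columns[i]));
            for (auto& t : readers)
                t.join();

            // Stage 2: linear combination of the cells read by the input threads.
            const std::size_t rows = columns.empty() ? 0 : columns[0].size();
            std::vector<double> result(rows, 0.0);
            for (std::size_t i = 0; i < columns.size(); ++i)
                for (std::size_t r = 0; r < rows && r < columns[i].size(); ++r)
                    result[r] += weights[i] * columns[i][r];

            // Stage 3: write the output container (the ofstream buffers the writes into blocks).
            std::ofstream out("result.txt");
            for (double v : result)
                out << v << '\n';

            std::cout << "wrote " << rows << " rows\n";
            return 0;
        }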

  • How to tell which thread(s) are producing all the garbage?

    - by Brad Hein
    I have an app with about 15 threads. Most do mundane tasks and sleep most of their lives. Others collect information and cache it in hashmaps. The hashmaps grow to a moderate size and level out. The number of keys and size of value remains constant, but the contents of the values changes (at 33 keys per second average). When I start my app, I notice the garbage collection interval goes from minutes to once per second, and the amount of garbage is 700k+ each time. In fact as I was writing this, it caused my phone to reboot with an error "Referencetable Overflow". Here's my question: Are there any tricks to identifying which threads are producing the garbage, or even finding out more about what garbage they are producing?

  • Javascript self contained sandbox events and client side stack

    - by amnon
    I'm in the process of moving a JSF-heavy web application to a REST and mainly JS module application. I've watched "Scalable JavaScript Application Architecture" by Nicholas Zakas on YUI Theater (excellent video) and implemented much of the talk with good success, but I have some questions:

        1. I found the lecture a little confusing with regard to the relationship between modules and sandboxes. On one hand, to my understanding, modules should not be affected by something happening outside of their sandbox, and this is why they publish events via the sandbox (and not via the core, as they do access the core for hiding the base library). But each module in the application gets a new sandbox? Shouldn't the sandbox limit events to the modules using it, or should events be published cross-page? E.g.: if I have two editable tables and I want to contain each one in a different sandbox so that its events affect only the modules inside that sandbox (something like a message box per table, which is a different module/widget), how can I do that with a sandbox per module? Of course I can prefix the events with the module id, but that creates coupling that I want to avoid ... and I don't want to package modules together as one module per combination, as I already have 6-7 modules.

        2. While I can hide the base library for small things like an id selector etc., I would still like to use the base library for module dependencies and resource loading, using something like YUI Loader or dojo.require. So in fact I'm hiding the base library, but the modules themselves are defined and loaded by the base library ... which seems a little strange to me.

        3. Libraries don't return simple JS objects but usually wrap them, e.g. you can do something like $$('.classname').each(.. which cleans up the code a lot. It makes no sense to wrap the base and then, in the module, create a dependency on the base library by executing .each; but not using those features means a lot of code gets written that could be left out ... and implementing that functionality is very bug-prone.

        4. Does anyone have any experience with building a front-side stack of this order? How easy is it to change the base library and/or have modules from different libraries, e.g. using YUI DataTable but doing form validation with Dojo?

        5. Somewhat a combination of 2 and 4: if you choose to do something like I said and load Dojo form validation widgets for inputs via YUI Loader, would that mean Dojo core is a module and the form module is dependent on it?

    Thanks.

  • Reducing a normalized table to one value

    - by Dio
    Hello, I'm sure this has been asked, but I'm not quite sure how to properly search for this question; my apologies. I have two tables, Foo and Bar. Foo has one row per food; Bar has many matching descriptor rows per food.

        Foo
        name    id
        Apple   1
        Orange  2

        Bar
        id  description
        1   Tasty
        1   Ripe
        2   Sweet

    etc. (sorry for the somewhat contrived example). I'm trying to return a query where, for each row in Foo, if Bar contains a descriptor in ('Tasty', 'Juicy') it returns true, e.g.:

        Output
        Apple   True
        Orange  False

    I had been solving this somewhat trivially with a case when I only had one item to match:

        select Foo.name,
               case bar.description when 'Tasty' then True else 'False' end
        from Foo
        left join Bar on foo.id = bar.id
        where bar.description = 'Tasty'

    But with multiple items, I keep ending up with extra rows:

        Output
        Apple   True
        Apple   False
        etc. etc.

    Can someone point me in the right direction on how to think about this problem or what I should be doing? Thank you.

  • Should I aim for fewer HTTP requests or more cacheable CSS files?

    - by Jonathan Hanson
    We're being told that fewer HTTP requests per page load is a Good Thing. The extreme form of that for CSS would be to have a single, unique CSS file per page, with any shared site-wide styles duplicated in each file. But there's a trade off there. If you have separate shared global CSS files, they can be cached once when the front page is loaded and then re-used on multiple pages, thereby reducing the necessary size of the page-specific CSS files. So which is better in real-world practice? Shorter CSS files through multiple discrete CSS files that are cacheable, or fewer HTTP requests through fewer-but-larger CSS files?

  • Optimizing a fluid grid layout

    - by user1815176
    I recently added a grid layout, but I can't figure out how to make my links work. The grid I used is the 1140 one at http://cssgrid.net/. I studied the source code of that website and tried to make my page like theirs, but when I put everything in it made mine worse, and the grid didn't even work. This is how my website is supposed to look: http://spencedesign.netau.net/singaporehome.html and this is how it actually looks: http://spencedesign.netau.net/home.html And when you reduce the size, it doesn't look like it's supposed to. When you shrink the window I want the pictures (links) to go two per row, then one per row, depending on how small the page is. I also want the quote to wrap onto multiple rows when the page is too small for it. But I can't figure out how to make the page look normal at regular size, let alone make it look good in a smaller browser. Thanks!

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

        - Use GL face culling (the OpenGL function, not the scene logic)
        - Only send 1 matrix to the GPU (the projection-modelview combination), therefore decreasing the MVP calculations from per vertex to once per model (as it should be)
        - Use interleaved vertices
        - Minimise GL calls as much as possible, batching where appropriate

    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge), and received almost no performance change. Whilst I am receiving around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, therefore I assume I am GPU-bound for increasing performance. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.
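
    As a concrete illustration of the "interleaved vertices" tip above, here is a hedged sketch (the attribute locations and the GLEW include are assumptions; it expects a current GL 3.x context with the relevant VAO and GL_ARRAY_BUFFER already bound):

        #include <GL/glew.h>
        #include <cstddef>

        // Keeping position/normal/uv adjacent per vertex lets the GPU fetch a whole
        // vertex from one contiguous region instead of three separate arrays.
        struct Vertex {
            float position[3];
            float normal[3];
            float uv[2];
        };

        void setupInterleavedAttributes()
        {
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (const void*)offsetof(Vertex, position));

            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (const void*)offsetof(Vertex, normal));

            glEnableVertexAttribArray(2);
            glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (const void*)offsetof(Vertex, uv));
        }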

  • Tool for response time analysis on JBoss server?

    - by Ariel Vardi
    I am running a pretty high-traffic cluster of JBoss servers serving REST requests, and I am interested in tools that read the access logs in Tomcat format (with the %D parameter) to provide a detailed analysis of response time on a per-call basis. Ideally this tool would generate a chart showing the progression of the response time throughout the day, hour by hour, then a weekly view with averages per day, and a monthly view with averages per week (Cacti style). I've looked for such tools and couldn't find anything. Are any of you aware of something close to that, before I start writing my own? I haven't looked into Cacti extensions yet, but could that be an option?

  • MFC/WIN32: mouse hover highlight in listctrl

    - by Mordachai
    The ListView control of Windows Explorer gives a highlight to whatever item is under the mouse, without affecting the current selection. This helps enormously with relating what item a given tooltip applies to within a list view - especially in report mode. However, I am currently unable to find any APIs that would give my MFC application's CListCtrl that same behavior. Extended styles only have LVS_EX_TRACKSELECT, which actually alters the current selection (yuck!). Does anyone know how to give a standard CListCtrl (or whatever it actually sits on top of) the mouse hot-tracking capability? I found some articles on how to provide per-cell and per-row tooltip text, but it's hard to tell what the tooltips relate to without something highlighting...
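
    Not something the stock control exposes, but one possible do-it-yourself sketch (a hypothetical subclass, not how Explorer implements it): track the hovered row with WM_MOUSEMOVE + HitTest and tint it from an NM_CUSTOMDRAW handler, leaving the selection untouched. Clearing the highlight when the mouse leaves the control (WM_MOUSELEAVE via TrackMouseEvent) is omitted for brevity:

        #include <afxwin.h>
        #include <afxcmn.h>

        class CHotTrackListCtrl : public CListCtrl
        {
            int m_hotItem = -1;   // row currently under the mouse, -1 if none

            afx_msg void OnMouseMove(UINT nFlags, CPoint point)
            {
                UINT flags = 0;
                int item = HitTest(point, &flags);
                if (item != m_hotItem)
                {
                    int old = m_hotItem;
                    m_hotItem = item;
                    if (old != -1)  RedrawItems(old, old);    // un-highlight previous row
                    if (item != -1) RedrawItems(item, item);  // highlight new row
                }
                CListCtrl::OnMouseMove(nFlags, point);
            }

            afx_msg void OnCustomDraw(NMHDR* pNMHDR, LRESULT* pResult)
            {
                NMLVCUSTOMDRAW* pDraw = reinterpret_cast<NMLVCUSTOMDRAW*>(pNMHDR);
                *pResult = CDRF_DODEFAULT;
                switch (pDraw->nmcd.dwDrawStage)
                {
                case CDDS_PREPAINT:
                    *pResult = CDRF_NOTIFYITEMDRAW;            // ask for per-item notifications
                    break;
                case CDDS_ITEMPREPAINT:
                    if ((int)pDraw->nmcd.dwItemSpec == m_hotItem)
                        pDraw->clrTextBk = RGB(229, 243, 255); // light hover tint, selection untouched
                    break;
                }
            }

            DECLARE_MESSAGE_MAP()
        };

        BEGIN_MESSAGE_MAP(CHotTrackListCtrl, CListCtrl)
            ON_WM_MOUSEMOVE()
            ON_NOTIFY_REFLECT(NM_CUSTOMDRAW, &CHotTrackListCtrl::OnCustomDraw)
        END_MESSAGE_MAP()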

  • jQuery Validation Plugin: Packer undefined error?

    - by Rosarch
    I'm using the jQuery validation plugin from bassistance.de. It works fine. From <head>:

        <script type="text/javascript" src="/static/JQuery.js"></script>
        <script type="text/javascript" src="/static/js-lib/jquery.validate.pack.js"></script>
        <script type="text/javascript" src="/static/js-lib/jquery.validate.additional-methods.js"></script>

    At first, this was the only validation code I had, and it worked:

        $("form").validate();
        $("#form-username").rules("add", {
            required: true,
            email: true,
        });

    It was validating this HTML:

        <form id="form-username-form" action="api/user_of_email" method="get">
            <p>
                <label for="form-username">Email:</label>
                <input type="text" name="email" id="form-username" />
                <input type="submit" value="Submit" id="form-submit" />
            </p>
        </form>

    Great, everything works. But then I add this JS:

        $("#form-choose-options input[type='text']").rules("add", {
            number: true,
        });

    to validate this markup:

        <form id="form-choose-options" action="api/set_options" method="get">
            <p>
                <label for="form-min-credits">Min credits per term:</label><input type="text" name="min_credits" id="form-min-credits" />
                <br />
                <label for="form-optimal-credits">Optimal credits per term:</label><input type="text" name="optimal_credits" id="form-optimal-credits" />
                <br />
                <label for="form-max-credits">Max credits per term:</label><input type="text" name="max_credits" id="form-max-credits" />
                <br />
                <label for="form-low-GPA">Lowest acceptable GPA:</label><input type="text" name="low_GPA" id="form-low-GPA" />
                <br />
                <label for="form-high-GPA">Highest realistic GPA:</label><input type="text" name="high_GPA" id="form-high-GPA" />
                <br />
                <input type="hidden" class="user-pk" name="pk"/>
                <input type="submit" value="Submit" />
            </p>
        </form>

    This causes a JavaScript error on document load:

        $.data(f.form, "validator") is undefined

    The error is from the packer function. What am I doing wrong?

  • How can we block the user from unchecking a DataGridView checkbox?

    - by hawbsl
    We have a DataGridViewCheckBox column bound to a Boolean property in our class. The property setter has some logic which says that under certain conditions a True flag cannot be changed, i.e. it stays checked forever. This is on a per-record basis, so the entire column can't be read-only, only certain rows. Pseudo code:

        Public Property Foo() As Boolean
            Get
                Return _Foo
            End Get
            Set(ByVal value As Boolean)
                If _Foo And Bar And value = False Then
                    ' do nothing; in this scenario, once you're true, you stay true
                Else
                    _Foo = value
                End If
            End Set
        End Property

    Data binding is handling all of this fine, except that the checkbox is visibly cleared when it's clicked. Then, of course, when the binding / setter is fired (as you move off that cell) it is restored to its checked status per the underlying logic. Ultimately it doesn't matter too much, but it's a clumsy bit of UI. How can we intercept the user's click and keep the checkbox checked?

  • UPDATE from SELECT complains about more than one value returned

    - by Álvaro G. Vicario
    I have this data structure:

        request
        =======
        building_id
        lot_code

        building
        ========
        building_id
        lot_id

        lot
        ===
        lot_id
        lot_code

    The request table is missing the value for the building_id column and I want to fill it in from the other tables. So I've tried this:

        UPDATE request
        SET building_id = (
            SELECT bu.building_id
            FROM building bu
            INNER JOIN lot lo ON bu.lot_id = lo.lot_id
            WHERE lo.lot_code = request.lot_code
        );

    But I'm getting this error:

        Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

    Is it due to wrong syntax? The data model allows more than one building per lot but actual data doesn't contain such cases so there should be at most one building_id per lot_code.
