Search Results

Search found 35433 results on 1418 pages for 'document based'.


  • Push-Based Events in a Services Oriented Architecture

    - by Colin Morelli
    I have come to a point, in building a service-oriented architecture (on top of Thrift), where I need to expose events and allow listeners. My initial thought was to create an EventService to handle publishing and subscribing to events. That EventService can use whatever implementation it desires to actually distribute the events. My client automatically round-robins service requests to available service hosts, which are determined using Zookeeper-based service discovery. So I'd probably use JMS inside EventService, mainly for the purpose of persisting messages (in the event that a service host for EventService goes down before it can distribute the message to all of the available listeners).

    When I started considering this, I began looking into the differences between Queues and Topics. Topics unfortunately won't work for me, because (at least for now) all listeners must receive the message, even if they were down at the time the event was pushed, or hadn't made a subscription yet because they haven't completed startup (during deployment, for example); messages should be queued until the service is available.

    However, I don't want EventService to be responsible for handling all of the events. I don't think it should contain the code that reacts to events; each of the services should do what it needs with a given event. This would mean that each service would need a JMS connection, which calls into question the value of having EventService at all (as the services could individually publish and subscribe to JMS directly). It also couples all of the services to JMS, when I'd rather have a single service that's responsible for determining how to distribute events.

    What I had thought was to publish an event to EventService, which pulls a configuration of listeners from some configuration source (database, flat file, irrelevant for now). It replicates the message and pushes each copy back into a queue with information specific to that listener (so, if there are 3 listeners, 1 event becomes 3 events in JMS). Then, another thread in EventService (which is replicated, running on multiple hosts) would pull from the queue, attempt to make the service call to the "listener", and return the message to the queue (if the service is down) or discard the message (if the listener completed successfully).

    tl;dr: If I have an EventService that is responsible for receiving events and delegating service calls to "event listeners" (which are really just endpoints on other services), how should it know how to craft the service call? Should I create a generic "Event" object that is shared among all services? Then the EventService can just construct this object and pass it to the service call. Or is there a better answer to this problem entirely?
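
    Purely as an illustration of that last idea (and not the poster's actual design), here is a minimal C# sketch of a generic event envelope plus a configuration-driven fan-out loop. All type names (EventEnvelope, IListenerEndpoint, EventDispatcher) are hypothetical, and the in-memory queue merely stands in for the persistent JMS queue described above.

        using System;
        using System.Collections.Generic;

        // Hypothetical generic event envelope shared by all services (illustration only).
        public class EventEnvelope
        {
            public Guid Id { get; set; }                       // unique id so redeliveries can be de-duplicated
            public string Topic { get; set; }                  // e.g. "user.created"
            public DateTime OccurredUtc { get; set; }
            public IDictionary<string, string> Payload { get; set; }  // loosely typed payload
        }

        // Hypothetical abstraction over "an endpoint on another service".
        public interface IListenerEndpoint
        {
            bool TryDeliver(EventEnvelope evt);                // false if the listener is unreachable
        }

        // Sketch of the EventService fan-out: one incoming event becomes one queued
        // delivery per configured listener, retried until delivery succeeds.
        public class EventDispatcher
        {
            private readonly Func<string, IReadOnlyList<IListenerEndpoint>> _listenersForTopic;
            private readonly Queue<(EventEnvelope Evt, IListenerEndpoint Target)> _queue =
                new Queue<(EventEnvelope, IListenerEndpoint)>();

            public EventDispatcher(Func<string, IReadOnlyList<IListenerEndpoint>> listenersForTopic)
            {
                _listenersForTopic = listenersForTopic;        // backed by a database, flat file, etc.
            }

            public void Publish(EventEnvelope evt)
            {
                foreach (var listener in _listenersForTopic(evt.Topic))
                    _queue.Enqueue((evt, listener));           // replicate: 3 listeners => 3 queued deliveries
            }

            public void PumpOnce()
            {
                if (_queue.Count == 0) return;
                var (evt, target) = _queue.Dequeue();
                if (!target.TryDeliver(evt))
                    _queue.Enqueue((evt, target));             // listener down: put the delivery back
            }
        }

    In a real deployment the queue would be the persistent JMS queue and TryDeliver would be the Thrift call to the listener's endpoint.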

    Read the article

  • Booting From VIA VT6421A Based PCI SATA Card

    - by Priyan R
    I bought a VIA VT6421A-based SATA card for an 845-chipset motherboard. The card is working: I can access the SATA HDD from Windows and Linux. The problem is that I can't boot directly from the SATA card. My motherboard uses Award BIOS 6. I tried setting the first boot device to SCSI, but that didn't work, and no RAID BIOS screen from the card ever appeared. On searching I found a suggestion to add the VT6421A BIOS to the system BIOS as a PCI add-on ROM. I did that using cbrom6 and successfully added the VT6421A BIOS to the existing BIOS. But now, on booting, instead of the RAID BIOS the system BIOS shows a warning, something like "cannot load add-on ROM for vendor id xxxx device id xxx". What's wrong? The card is VT6421A based, and I added the VT6421A BIOS downloaded from the VIA website.

    Read the article

  • Open-source DNS-based incoming load balancing?

    - by sahil
    Hi, I am using a LinkProof device for incoming load balancing, which is DNS based. For example, my LinkProof has 3 ISP links. It also acts as a DNS server for my sites, so when someone connects to me using my DNS name, LinkProof gives them an IP from whichever of the 3 ISPs is responding fastest. Is there any similar open source solution for DNS-based incoming load balancing? sahil

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments, and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers suffer from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's as if Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to notice changes to its served files more quickly?

    Thoughts:

    I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?

    The website runs PHP code. Perhaps there are some PHP config differences between RHEL and Ubuntu? I checked realpath_cache_ttl, but both servers have it commented out, e.g.:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    We do use the APC opcode cache but don't think it's the issue, based on experimentation: the PHP code is in different file paths for each deployment and we ensure stat=1.

    Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me.

    One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.

    Read the article

  • Getting Safari document title/location with Scripting Bridge does not work in full-screen mode

    - by Mark
    I'm trying to get the URL and document title from the topmost Safari document/tab. I have an AppleScript and an Objective-C version using Apple's Scripting Bridge framework. Both versions work fine for most web pages; however, when I open a YouTube video in full-screen mode, the Scripting Bridge based version fails. The AppleScript works fine for "normal" and full-screen Safari windows. Can anyone see what is wrong with the Scripting Bridge code below that causes it to fail for full-screen Safari windows? Here is the code (I omitted error checking for brevity):

    AppleScript:

        tell application "Safari"
            # Give us some time to open video in full-screen mode
            delay 10
            do JavaScript "document.title" in document 0
        end tell

    Scripting Bridge:

        SafariApplication* safari = [SBApplication applicationWithBundleIdentifier:@"com.apple.Safari"];
        SBElementArray* windows = [safari windows];
        SafariTab* currentTab = [[windows objectAtIndex: 0] currentTab];
        // This fails when in full-screen mode:
        id result = [safari doJavaScript: @"document.title" in: currentTab];
        NSLog(@"title: %@", result);

    Scripting Bridge error (with added line breaks):

        Apple event returned an error. Event = 'sfri'\'dojs'{
            '----':'utxt'("document.title"),
            'dcnm':'obj '{
                'want':'prop',
                'from':'obj '{ 'want':'cwin', 'from':'null'(), 'form':'indx', 'seld':1 },
                'form':'prop',
                'seld':'cTab'
            }
        }
        Error info = {
            ErrorNumber = -1728;
            ErrorOffendingObject = <SBObject @0x175c2de0: currentTab of SafariWindow 0 of application "Safari" (238)>;
        }

    I could not find details about the given error code. It complains about 'currentTab', which shows that the JavaScript event at least made it all the way to Safari. I assume that the current tab receives the event but refuses to run the JS code because it is in full-screen mode. However, why does this work for an AppleScript? Don't they use the same code path eventually? Any suggestions are greatly appreciated. Thanks!

    Read the article

  • Having trouble mapping Sharepoint document library as a Network Place

    - by Sdmfj
    I am using Office 365 with SharePoint Online 2013. Using Internet Explorer, these are the steps I have taken: I ticked "keep me signed in" on the portal.microsoftonline.com page (it redirects me to a GoDaddy login page because Office 365 was purchased through them), added these sites to Trusted Sites (as well as every page in the process), and chose automatic logon in Internet Explorer. Once on the document library, I open it with Explorer and copy the address as text. I then go to My Computer, right-click to add a network place, and paste in the document library address. It successfully adds the library as a network place about 30% of the time; I can do this same process 3 times in a row and it will fail the first 2 times and then succeed. It works for a little while and then I get an error that the DNS cannot be found. I need multiple users in our organization to be able to access this document library as if it were a mapped network drive on our local network. Is there an easier way to do this? I may just sync using the OneDrive app, but I would prefer direct access to the files without worrying about users keeping their files synced.

    Read the article

  • Why does a document in Word 2007 stop recognizing the mouse after the document loses focus?

    - by alt234
    When I open a document in Word 2007, everything works fine: I can edit, highlight text, etc. However, the instant Word loses focus and I focus back, the document doesn't recognize anything the mouse does. The tabbed menu at the top seems to recognize the mouse, but the document itself does not. I can scroll through via the scroll wheel and I can type; however, typing just shows up where the mouse cursor last was before focus was taken away. I've tried clearing some Word data registry keys. I've also found that some Word add-ins can cause problems; LaserFiche is one I see mentioned a lot. As far as I can tell, I have no add-ins though. Any ideas? It's crazy-annoying.

    UPDATE:
    - Word is the only program that has this problem.
    - Typically I have Toad (Oracle DB management app), an XP virtual machine with various apps running on it, Skype, Google Talk, and maybe a handful of other programs open at any given time (Windows Media Player, Outlook).
    - Yes, this happens even if nothing else is running, and from a fresh restart as well.
    - I'm running Vista 64 with SP1.
    - According to Windows Update, I have the latest of everything. This has been happening for a couple of months now; I just never took the time to look into it because I usually never have to use Word.

    Read the article

  • Get value from CSS using document.getElementById().style.height in JavaScript

    - by Jamex
    Hi, please offer insight into this mystery. I am trying to get the height value from a div box with:

        var high = document.getElementById("hintdiv").style.height;
        alert(high);

    I can get this value just fine if the attribute is contained within the div tag, but it returns a blank value if the attribute is defined in the CSS section.

    This is fine: it shows 100px as the value, so the value can be accessed.

        <div id="hintdiv" style="height:100px; display: none;">
        . .
        var high = document.getElementById("hintdiv").style.height;
        alert(high);

    This is not fine: it shows an empty alert, and the value is practically 0.

        #hintdiv {
        height:100px
        display: none;
        }

        <div id="hintdiv">
        . .
        var high = document.getElementById("hintdiv").style.height;
        alert(high);

    But I have no problem accessing/changing the "display:none" attribute whether it is in the tag or in the CSS section, and the div box displays correctly with both attribute definition methods (inside the tag or in the CSS section). I also tried to access the value in other ways, but no luck:

        document.getElementById("hintdiv").style.height.value ----> undefined
        document.getElementById("hintdiv").height ----> undefined
        document.getElementById("hintdiv").height.value ----> error, no execution

    Any solution? TIA.

    Read the article

  • document.write not working when loading an external JavaScript source

    - by jadent
    I'm trying to load an external JavaScript file dynamically into an HTML element to preview an ad tag. The script loads and executes, but the script contains document.write, which does not execute properly, yet there are no errors.

        <html>
        <head>
        <script src="//ajax.googleapis.com/ajax/libs/jquery/2.0.3/jquery.min.js"></script>
        <script type="text/javascript">
        $(function() {
            source = 'http://ib.adnxs.com/ttj?id=555281';
            // DOM Insert Approach
            // -----------------------------------
            var script = document.createElement('script');
            script.setAttribute('type', 'text/javascript');
            script.setAttribute('src', source);
            document.body.appendChild(script);
        });
        </script>
        </head>
        <body>
        </body>
        </html>

    I can get it to work if:
    - I move the source to the same domain for testing, or
    - the script is modified to use document.createElement and appendChild instead of document.write, like the code above.

    I don't have the ability to modify the script, since it is generated and hosted by a 3rd party. Does anyone know why the document.write will not work correctly? And is there a way to get around this? Thanks for the help!

    Read the article

  • "Document in ADF" on Canon MX340

    - by Michael Donohue
    I have a Canon MX340 multifunction printer. Recently, it keeps saying "Document in ADF" when I turn on the printer. I've tried clearing this multiple times, but as soon as the feeder wheels stop turning, in an attempt to clear the document feeder, it just pops up the same error again. This is particularly annoying, as it blocks all functions on the device - I cannot print, even though printing has no interaction with the document feeder. I've opened up the feeder device, as much as can be done with fingers alone. There just doesn't seem to be anything in there. I ran a sheet of paper through about six times, just to see if some dust might be getting in the way, and I've blown out the feeder with air. Still nothing. At this point, I don't care too much about the ADF working, I just want to disable whatever sensor is tripping this error message. Any ideas? I found this thread online, where a user has the same problem. But no resolution was reached there.

    Read the article

  • Web-based IMAP client with support for multiple mailboxes

    - by Nils
    I would like to switch from desktop-based e-mail software (Thunderbird) to a web-based solution that I run on my own web server. I have already tried out Roundcube and while it does work reasonably well so far there is one great feature from Thunderbird that seems to be missing - it doesn't allow me to have a unified mailbox for multiple IMAP accounts. Can anybody recommend a web-based IMAP client that has this feature?

    Read the article

  • nginx won't serve an error_page in a subdirectory of the document root

    - by Brandan
    (Cross-posted from Stack Overflow; could possibly be migrated from there.) Here's a snippet of my nginx configuration:

        server {
            error_page 500 /errors/500.html;
        }

    When I cause a 500 in my application, Chrome just shows its default 500 page (Firefox and Safari show a blank page) rather than my custom error page. I know the file exists because I can visit http://server/errors/500.html and I see the page. I can also move the file to the document root and change the configuration to this:

        server {
            error_page 500 /500.html;
        }

    and nginx serves the page correctly, so it doesn't seem like something else is misconfigured on the server. I've also tried:

        server {
            error_page 500 $document_root/errors/500.html;
        }

    and:

        server {
            error_page 500 http://$http_host/errors/500.html;
        }

    and:

        server {
            error_page 500 /500.html;
            location = /500.html {
                root /path/to/errors/;
            }
        }

    with no luck. Is this expected behavior? Do error pages have to exist at the document root, or am I missing something obvious?

    Update 1: This also fails:

        server {
            error_page 500 /foo.html;
        }

    when foo.html does indeed exist in the document root. It almost seems like something else is overwriting my configuration, but this block is the only place anywhere in /etc/nginx/* that references the error_page directive. Is there any other place that could set nginx configuration?

    Read the article

  • What I don’t like about WIF’s Claims-based Authorization

    - by Your DisplayName here!
    In my last post I wrote about what I like about WIF’s proposed approach to authorization – I also said that I would definitely build upon that infrastructure for my own systems. But implementing such a system is a little harder than it could be. Here’s why (and that’s purely my perspective).

    First of all, WIF’s authorization comes in two “modes”:

    - Per-request authorization. When an ASP.NET/WCF request comes in, the registered authorization manager gets called. For SOAP the SOAP action gets passed in; for HTTP requests (ASP.NET, WCF REST) the URL and verb.
    - Imperative authorization. This happens when you explicitly call the claims authorization API from within your code. There you have full control over the values for action and resource.

    In ASP.NET, per-request authorization is optional (it depends on whether you have added the ClaimsAuthorizationHttpModule). In WCF you always get the per-request checks as soon as you register the authorization manager in configuration.

    I personally prefer imperative authorization because, first of all, I don’t believe in URL-based authorization. Especially in the times of MVC and routing tables, URLs can be easily changed – but then you also have to adjust your authorization logic every time. Also, you typically need more knowledge than a simple “is user x allowed to invoke operation x”.

    One problem I have is that both the per-request calls and the standard WIF imperative authorization APIs wrap actions and resources in the same claim type. This makes it hard to distinguish between the two authorization modes in your authorization manager, but you typically need that distinction to structure your authorization policy evaluation in a clean way.

    The second problem (which is somewhat related to the first one) is the standard API for interacting with the claims authorization manager. The API comes as an attribute (ClaimsPrincipalPermissionAttribute) as well as a class to use programmatically (ClaimsPrincipalPermission). Both only allow you to pass in simple strings (which results in the wrapping with standard claim types mentioned earlier), and both throw a SecurityException when the check fails. The attribute is a code access permission attribute (like PrincipalPermission), which means it will always be invoked regardless of how you call the code. This may be exactly what you want, or not: in a unit testing situation (like an MVC controller) you typically want to test the logic in the function – not the security check.

    The good news is that the WIF API is flexible enough that you can build your own infrastructure around its core. For my own projects I implemented the following extensions:

    - A way to invoke the registered claims authorization manager with more overloads, e.g. with different claim types or a complete AuthorizationContext.
    - A new CAS attribute (with the same calling semantics as the built-in one) with custom claim types.
    - An MVC authorization attribute with custom claim types.
    - A way to use branching – as opposed to catching a SecurityException.

    I will post the code for these various extensions here – so stay tuned.
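
    To make the branching point concrete, here is a minimal C# sketch (not the author's forthcoming code) contrasting the built-in exception-based check with a hypothetical TryCheckAccess helper that asks the authorization manager directly. The members used (the ClaimsPrincipalPermission(resource, action) constructor and ClaimsAuthorizationManager.CheckAccess with an AuthorizationContext) are assumed from the WIF 1.0 samples; the exact signatures may differ in your WIF version.

        using System;
        using System.Security;
        using System.Threading;
        using Microsoft.IdentityModel.Claims;   // WIF 1.0 (Microsoft.IdentityModel.dll) - assumption

        public static class AuthorizationManagerExtensions
        {
            // Hypothetical "branching" helper: returns a bool instead of throwing.
            public static bool TryCheckAccess(this ClaimsAuthorizationManager authorizationManager,
                                              string resource, string action)
            {
                var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
                if (principal == null) return false;

                // AuthorizationContext(principal, resource, action) as shown in the WIF samples.
                return authorizationManager.CheckAccess(
                    new AuthorizationContext(principal, resource, action));
            }
        }

        public class OrderService
        {
            private readonly ClaimsAuthorizationManager _authZ;
            public OrderService(ClaimsAuthorizationManager authZ) { _authZ = authZ; }

            public void CancelOrderThrowing(int orderId)
            {
                // Built-in style: simple strings only, failure surfaces as a SecurityException.
                try
                {
                    new ClaimsPrincipalPermission("Order", "Cancel").Demand();
                    // ... cancel the order ...
                }
                catch (SecurityException)
                {
                    // ... translate into a friendly "not allowed" result ...
                }
            }

            public void CancelOrderBranching(int orderId)
            {
                // Branching style: an ordinary bool you can act on.
                if (_authZ.TryCheckAccess("Order", "Cancel"))
                {
                    // ... cancel the order ...
                }
            }
        }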

    Read the article

  • Part 15: Fail a build based on the exit code of a console application

    In the series the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    When you have a console application or a batch file that has errors, the exit code is set to a value other than 0. You would expect that the build would see this and report an error. This is not true, however. First we set up the scenario.

    - Add a ConsoleApplication project to the solution you are building.
    - In the Main function, set the ExitCode to 1:

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("This is an error in the script.");
                Environment.ExitCode = 1;
            }
        }

    - Check in the code. You can choose to include this console application in the build, or you can decide to add the exe to source control.

    Now modify the build process template CustomTemplate.xaml:

    - Add an argument ErrornousScript.
    - Scroll down beneath the TryCatch activity called "Try Compile, Test, and Associate Changesets and Work Items".
    - Add a Sequence activity to the template.
    - In the Sequence, add a ConvertWorkspaceItem and an InvokeProcess activity (see Part 14: Execute a PowerShell script for more detailed steps).
    - In the FileName property of the InvokeProcess, use the ErrornousScript so the console application will be called.

    Modify the build definition and make sure that the ErrornousScript executes the exe that sets the exit code to 1. You have now set up a build definition that will execute the erroneous console application. When you run it, you will see that the build succeeds. This is not what you want! To solve this, you can make use of the Result property on the InvokeProcess activity. So let's change our build process template:

    - Add new variables (scoped to the sequence where you run the console application) called ExitCode (type = Int32) and ErrorMessage.
    - Click on the InvokeProcess activity and change the Result property to ExitCode.
    - In the Handle Standard Output section of the InvokeProcess, add a Sequence activity.
    - In the Sequence activity, add an Assign primitive and set the following properties: To = ErrorMessage, Value = If(Not String.IsNullOrEmpty(ErrorMessage), Environment.NewLine + ErrorMessage, "") + stdOutput. Also add the default BuildMessage to the sequence that outputs the stdOutput.
    - Beneath the InvokeProcess activity, add an If activity with the condition ExitCode <> 0.
    - In the Then section, add a Throw activity and set the Exception property to New Exception(ErrorMessage).

    When you now check in the build process template and run the build, the build fails as expected. And that is exactly what we want.

    You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
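
    As a small aside (not part of the original walkthrough), returning an int from Main is an equivalent way to set the process exit code that the InvokeProcess activity's Result property captures:

        using System;

        class Program
        {
            // Returning a non-zero value from Main sets the process exit code,
            // just like assigning Environment.ExitCode in the void Main above.
            static int Main(string[] args)
            {
                Console.WriteLine("This is an error in the script.");
                return 1;   // the build template's ExitCode variable will receive 1
            }
        }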

    Read the article

  • How do I implement a quaternion based camera?

    - by kudor gyozo
    I looked at several tutorials about this and when I thought I understood I tried to implement a quaternion based camera. The problem is it doesn't work correctly, after rotating for approx. 10 degrees it jumps back to -10 degrees. I have no idea what's wrong. I'm using openTK and it already has a quaternion class. I'm a noob at opengl, I'm doing this just for fun, and don't really understand quaternions, so probably I'm doing something stupid here. Here is some code: (Actually almost all the code except the methods that load and draw a vbo (it is taken from an OpenTK sample that demonstrates vbo-s)) I load a cube into a vbo and initialize the quaternion for the camera protected override void OnLoad(EventArgs e) { base.OnLoad(e); cameraPos = new Vector3(0, 0, 7); cameraRot = Quaternion.FromAxisAngle(new Vector3(0,0,-1), 0); GL.ClearColor(System.Drawing.Color.MidnightBlue); GL.Enable(EnableCap.DepthTest); vbo = LoadVBO(CubeVertices, CubeElements); } I load a perspective projection here. This is loaded at the beginning and every time I resize the window. protected override void OnResize(EventArgs e) { base.OnResize(e); GL.Viewport(0, 0, Width, Height); float aspect_ratio = Width / (float)Height; Matrix4 perpective = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect_ratio, 1, 64); GL.MatrixMode(MatrixMode.Projection); GL.LoadMatrix(ref perpective); } Here I get the last rotation value and create a new quaternion that represents only the last rotation and multiply it with the camera quaternion. After this I transform this into axis-angle so that opengl can use it. (This is how I understood it from several online quaternion tutorials) protected override void OnRenderFrame(FrameEventArgs e) { base.OnRenderFrame(e); GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); double speed = 1; double rx = 0, ry = 0; if (Keyboard[Key.A]) { ry = -speed * e.Time; } if (Keyboard[Key.D]) { ry = +speed * e.Time; } if (Keyboard[Key.W]) { rx = +speed * e.Time; } if (Keyboard[Key.S]) { rx = -speed * e.Time; } Quaternion tmpQuat = Quaternion.FromAxisAngle(new Vector3(0,1,0), (float)ry); cameraRot = tmpQuat * cameraRot; cameraRot.Normalize(); GL.MatrixMode(MatrixMode.Modelview); GL.LoadIdentity(); Vector3 axis; float angle; cameraRot.ToAxisAngle(out axis, out angle); GL.Rotate(angle, axis); GL.Translate(-cameraPos); Draw(vbo); SwapBuffers(); } Here are 2 images to explain better: I rotate a while and from this: it jumps into this Any help is appreciated. Update1: I add these to a streamwriter that writes into a file: sw.WriteLine("camerarot: X:{0} Y:{1} Z:{2} W:{3} L:{4}", cameraRot.X, cameraRot.Y, cameraRot.Z, cameraRot.W, cameraRot.Length); sw.WriteLine("ry: {0}", ry); The log is available here: http://www.pasteall.org/26133/text. At line 770 the cube jumps from right to left, when camerarot.Y changes signs. I don't know if this is normal. Update2 Here is the complete project.

    Read the article

  • 2D Tile based Game Collision problem

    - by iNbdy
    I've been trying to program a tile based game, and I'm stuck at the collision detection. Here is my code (not the best ^^): void checkTile(Character *c, int **map) { int x1,x2,y1,y2; /* Character position in the map */ c->upY = (c->y) / TILE_SIZE; // Top left corner c->downY = (c->y + c->h) / TILE_SIZE; // Bottom left corner c->leftX = (c->x) / TILE_SIZE; // Top right corner c->rightX = (c->x + c->w) / TILE_SIZE; // Bottom right corner x1 = (c->x + 10) / TILE_SIZE; // 10px from left side point x2 = (c->x + c->w - 10) / TILE_SIZE; // 10px from right side point y1 = (c->y + 10) / TILE_SIZE; // 10px from top side point y2 = (c->y + c->h - 10) / TILE_SIZE; // 10px from bottom side point /* Top */ if (map[c->upY][x1] > 2 || map[c->upY][x2] > 2) c->topCollision = 1; else c->topCollision = 0; /* Bottom */ if ((map[c->downY][x1] > 2 || map[c->downY][x2] > 2)) c->downCollision = 1; else c->downCollision = 0; /* Left */ if (map[y1][c->leftX] > 2 || map[y2][c->leftX] > 2) c->leftCollision = 1; else c->leftCollision = 0; /* Right */ if (map[y1][c->rightX] > 2 || map[y2][c->rightX] > 2) c->rightCollision = 1; else c->rightCollision = 0; } That calculates 8 collision points My moving function is like that: void movePlayer(Character *c, int **map) { if ((c->dirX == LEFT && !c->leftCollision) || (c->dirX == RIGHT && !c->rightCollision)) c->x += c->vx; if ((c->dirY == UP && !c->topCollision) || (c->dirY == DOWN && !c->downCollision)) c->y += c->vy; checkPosition(c, map); } and the checkPosition: void checkPosition(Character *c, int **map) { checkTile(c, map); if (c->downCollision) { if (c->state != JUMPING) { c->vy = 0; c->y = (c->downY * TILE_SIZE - c->h); } } if (c->leftCollision) { c->vx = 0; c->x = (c->leftX) * TILE_SIZE + TILE_SIZE; } if (c->rightCollision) { c->vx = 0; c->x = c->rightX * TILE_SIZE - c->w; } } This works, but sometimes, when the player is landing on ground, right and left collision points become equal to 1. So it's as if there were collision coming from left or right. Does anyone know why this is doing this?

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation. Records will have a combination of these functions run Sum, Aggregate, Sum over the last 90 seconds, or ignore. A data record looks like this: Date;Data;ID Question Assuming that ID is an int of some kind, and that int corresponds to a matrix of some delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print) but I don't know how to make the quantity of delegates fire based on the source data. (say print the evens, and sum the odds in this sample) using System; using System.Threading; using System.Collections.Generic; internal static class TestThreadpool { delegate int TestDelegate(int parameter); private static void Main() { try { // this approach works is void is returned. //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello"); int c = 0; int w = 0; ThreadPool.GetMaxThreads(out w, out c); bool rrr =ThreadPool.SetMinThreads(w, c); Console.WriteLine(rrr); // perhaps the above needs time to set up6 Thread.Sleep(1000); DateTime ttt = DateTime.UtcNow; TestDelegate d = new TestDelegate(PrintOut); List<IAsyncResult> arDict = new List<IAsyncResult>(); int count = 1000000; for (int i = 0; i < count; i++) { IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d); arDict.Add(ar); } for (int i = 0; i < count; i++) { int result = d.EndInvoke(arDict[i]); } // Give the callback time to execute - otherwise the app // may terminate before it is called //Thread.Sleep(1000); var res = DateTime.UtcNow - ttt; Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds); } catch (Exception e) { Console.WriteLine(e); } Console.ReadKey(true); } static int PrintOut(int parameter) { // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:"+parameter); var tmp = parameter * parameter; return tmp; } static int Sum(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static int Count(int parameter) { Thread.Sleep(5000); // Pretend to do some math... maybe save a summary to disk on a separate thread return parameter; } static void Callback(IAsyncResult ar) { TestDelegate d = (TestDelegate)ar.AsyncState; //Console.WriteLine("Callback is delayed and returned") ;//d.EndInvoke(ar)); } }
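
    One way to read the question: the "launch map" could simply be a dictionary from record ID to the set of delegates to run for that ID. The sketch below is an illustration only, with hypothetical type names (Record, LaunchMap, Dispatch); it is not taken from the post and ignores the threadpool/BeginInvoke aspect of the sample code.

        using System;
        using System.Collections.Generic;

        class Record
        {
            public DateTime Date;
            public double Data;
            public int Id;
        }

        static class LaunchMapDemo
        {
            // Map from record ID to the computations that should run for it.
            // Func<Record, double> stands in for Sum / Count / windowed-sum delegates.
            static readonly Dictionary<int, List<Func<Record, double>>> LaunchMap =
                new Dictionary<int, List<Func<Record, double>>>
                {
                    { 1, new List<Func<Record, double>> { Sum } },          // e.g. odd IDs: sum
                    { 2, new List<Func<Record, double>> { Print } },        // e.g. even IDs: print
                    { 3, new List<Func<Record, double>> { Sum, Print } },   // both
                    // IDs with no entry are ignored
                };

            static double Sum(Record r)   { return r.Data; }                // placeholder computation
            static double Print(Record r) { Console.WriteLine(r.Data); return r.Data; }

            static void Dispatch(Record r)
            {
                if (LaunchMap.TryGetValue(r.Id, out var handlers))
                    foreach (var handler in handlers)
                        handler(r);                                          // fire each mapped delegate
            }

            static void Main()
            {
                Dispatch(new Record { Date = DateTime.UtcNow, Data = 42.0, Id = 3 });
            }
        }

    Each Dispatch call could of course be queued to the thread pool, as in the sample above, once the per-ID delegate lists are resolved.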

    Read the article

  • Validating an XML document fragment against XML schema

    - by shylent
    Terribly sorry if I've failed to find a duplicate of this question. I have a certain document with a well-defined document structure. I am expressing that structure through an XML schema. That data structure is operated upon by a RESTful service, so various nodes and combinations of nodes (not the whole document, but fragments of it) are exposed as "resources". Naturally, I am doing my own validation of the actual data, but it makes sense to validate the incoming/outgoing data against the schema as well (before the fine-grained validation of the data). What I don't quite grasp is how to validate document fragments given the schema definition. Let me illustrate: Imagine, the example document structure is: <doc-root> <el name="foo"/> <el name="bar"/> </doc-root> Rather a trivial data structure. The schema goes something like this: <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <xsd:element name="doc-root"> <xsd:complexType> <xsd:sequence> <xsd:element name="el" type="myCustomType" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:complexType name="myCustomType"> <xsd:attribute name="name" use="required" /> </xsd:complexType> </xsd:schema> Now, imagine, I've just received a PUT request to update an 'el' object. Naturally, I would receive not the full document or not any xml, starting with 'doc-root' at its root, but the 'el' element itself. I would very much like to validate it against the existing schema then, but just running it through a validating parser wouldn't work, since it will expect a 'doc-root' at the root. So, again, the question is, - how can one validate a document fragment against an existing schema, or, perhaps, how can a schema be written to allow such an approach. Hope it made sense.
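
    For what it's worth, one common way to do this in .NET is to validate the fragment with an XmlReader wired to an XmlSchemaSet. The sketch below assumes the fragment's root (here 'el') is promoted to a global element declaration in the schema; with only a local declaration the validator has no global element to start from. The schema file name is hypothetical.

        using System;
        using System.IO;
        using System.Xml;
        using System.Xml.Schema;

        class FragmentValidation
        {
            static void Main()
            {
                var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
                settings.Schemas.Add(null, "document.xsd");    // hypothetical schema file
                settings.ValidationEventHandler +=
                    (s, e) => Console.WriteLine("{0}: {1}", e.Severity, e.Message);

                string fragment = "<el name=\"foo\"/>";        // e.g. the PUT request body
                using (var reader = XmlReader.Create(new StringReader(fragment), settings))
                {
                    while (reader.Read()) { }                   // reading drives the validation
                }
            }
        }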

    Read the article

  • Qt4: QPrinter / QPainter only prints the first document

    - by hurikhan77
    The problem is that my application only prints the first document fine. The second document is empty, only the page number is printed, the rest of the page is empty. In Qt4, I'm initializing the printer in the main.cpp in the following way: mw->printer = new QPrinter(QPrinter::HighResolution); mw->printer->setPaperSize(QPrinter::A5); mw->printer->setNumCopies(2); mw->printer->setColorMode(QPrinter::GrayScale); QPrintDialog *dialog = new QPrintDialog(mw->printer, mw); dialog->setWindowTitle(QObject::tr("Printer Setup")); if (dialog->exec() == QDialog::Accepted) { mw->printer->setFullPage(TRUE); return a.exec (); } This works fine for printing the first document from the application: qDebug("Printing"); QPainter p; if (!p.begin(printer)) { qDebug("Printing aborted"); return; } Q3PaintDeviceMetrics metrics(p.device()); int dpiy = metrics.logicalDpiY(); int dpix = metrics.logicalDpiX(); int tmargin = (int) ((marginTop / 2.54) * dpiy); int bmargin = (int) ((marginBottom / 2.54) * dpiy); int lmargin = (int) ((marginLeft / 2.54) * dpix); int rmargin = (int) ((marginRight / 2.54) * dpix); QRect body(lmargin, tmargin, metrics.width() - (lmargin + rmargin), metrics.height() - (tmargin + bmargin)); QString document; /* ... app logic to write a richtext document */ Q3SimpleRichText richText(QString("<qt>%1</qt>").arg(document), QFont("Arial", fontSize)); richText.setWidth(&p, body.width()); QRect view(body); int page = 1; do { // draw text richText.draw(&p, body.left(), body.top(), view, colorGroup()); view.moveBy(0, body.height()); p.translate(0, -body.height()); // insert page number p.drawText(view.right() - p.fontMetrics().width(QString::number(page)), view.bottom() + p.fontMetrics().ascent() + 5, QString::number(page)); // exit loop on last page if (view.top () >= richText.height ()) break; printer->newPage(); page++; } while (TRUE); if (!p.end()) qDebug("Print painter yielded failure"); But when this routine runs the second time, it does not print the document. It will just print an empty page but still with the page number on it. This worked fine before with Qt3.

    Read the article

  • Building InstallShield based Installers using Team Build 2010

    - by jehan
    For the last few weeks, I have been working on application packaging using all the widely used tools like InstallShield, WISE, WiX and Visual Studio Installer. So I thought it would be good to post about how to build the installers developed using these tools with Team Build 2010. This post will focus on how to build InstallShield-generated packages using Team Build 2010. For the release of VS2010, Microsoft partnered with Flexera, the makers of InstallShield, to create InstallShield Limited Edition especially for the customers of Visual Studio. Microsoft first planned to release WiX (Windows Installer XML) with VS2010, but later dropped WiX from VS2010 for reasons best known to them and partnered with InstallShield for the Limited Edition. This disappointed a lot of people, because InstallShield Limited Edition provides only a few of InstallShield's features, it may not be feasible to build complex installer packages with it, and it requires a license, whereas WiX is open source with no license costs and has proved efficient at building even the most complex packages. Only the last three features in the list below are available in InstallShield Limited Edition out of the full feature set offered by InstallShield:

    - Standalone Build System: Maintain a clean build machine by using only the part of InstallShield that compiles the installations.
    - InstallShield Best Practices Validation Suite: Avoid common installation issues.
    - Try and Die Functionality: Create a fully functional trial version of your product.
    - InstallShield Repackager: Create Windows Installer setups from any legacy installation.
    - Multilingual Support: Present installation text in up to 35 languages.
    - Microsoft App-V™ Support: Deploy your applications as App-V virtual packages that run without conflict.
    - Industry-Standard InstallScript: Achieve maximum flexibility in your installations.
    - Dialog Editor: Modify the layout of existing end-user dialogs, create new custom dialogs, and more.
    - Patch Creation: Build updates and patches for your products.
    - Setup Prerequisite Editor: Easily control prerequisite restart behavior and source locations.
    - String Editor View: Control the localizable text strings displayed at run time with this spreadsheet-like table.
    - Text File Changes View: Configure search-and-replace actions for content in text files to be modified at run time.
    - Virtual Machine Detection: Block your installations from running on virtual machines.
    - Unicode Support: Improve multi-language installation development.
    - Support for 64-Bit COM Extraction: Extract COM data from a 64-bit COM server.
    - Windows Installer Installation Chaining: Add MSI packages to your main installation and chain them together.
    - XML Support: Save time by quickly testing XML configuration changes to installation projects.
    - Billboard Support for Custom Branding: Display Adobe Flash billboards and other graphic files during the install process.
    - SaaS Support (IIS 7 and SSL Technologies): Easily deploy Windows-based Web applications.
    - Project Assistant: Jumpstart a project by using a simplified set of views.
    - Support for Digital Signatures: Save time by digitally signing all your files at build time.
    - Easily Run Custom Actions: Schedule a custom action to run at precisely the right moment in your installation.
    - Installation Prerequisites: Check for and install prerequisites before your installation is executed.
    To create an InstallShield project in Visual Studio and build it using Team Build 2010, first you have to add the InstallShield project template to your solution. If you want to use InstallShield Limited Edition, you can add it from File → New → Project → Other Project Types → Setup and Deployment → InstallShield LE; if you are using another version of InstallShield, you add it from File → New → Project → InstallShield Projects. Here I'm using the InstallShield 2011 Premier edition, as I already have it installed. I have created a simple package for the TailSpin application, which has a feature called Web, a few components, and an IIS web site for the TailSpin application.

    Before starting work on this, I thought I might need to build the package by calling an InvokeProcess activity in the build process template, or have to create a new custom activity. But it got built without any changes to the build process template. However, it was failing with the error message below:

        C:\Program Files (x86)\MSBuild\InstallShield\2011\InstallShield.targets (68): The "InstallShield.Tasks.InstallShield" task could not be loaded from the assembly C:\Program Files (x86)\MSBuild\InstallShield\2010Limited\InstallShield.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files(x86)\MSBuild\InstallShield\2011\InstallShield.Tasks.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.

    This error is due to the 64-bit build machine I'm using. The issue is reproducible if you are queuing a build on a 64-bit build machine. To avoid it, you have to ensure that the build definition for your InstallShield project is configured to load the InstallShield.Tasks.dll file (which is a 32-bit file); otherwise, you will encounter this build error informing you that the InstallShield.Tasks.dll file could not be loaded. To select the 32-bit version of MSBuild, click the Process tab of your build definition in Team Explorer. Then, under the Advanced node, find the MSBuild Platform setting and select x86. Note that if you are using a 32-bit build machine, you can select either Auto or x86 for the MSBuild Platform setting. Once I made the above changes, the build succeeded.

    Read the article
