Search Results

Search found 2639 results on 106 pages for 'pseudo streaming'.


  • Binary socket and policy file in Flex

    - by Daniil
    Hi, I'm trying to evaluate whether Flex can access binary sockets. There seems to be a class called Socket (flex.net package). The requirement is that Flex will connect to a server serving binary data. It will then subscribe to the data and receive a feed, which it will interpret and display as a chart. I've never worked with Flex - my experience lies with Java - so everything is new to me, and I'm trying to quickly set something simple up.

    The Java server expects the following:

        DataInputStream in = .....
        byte cmd = in.readByte();
        int size = in.readByte();
        byte[] buf = new byte[size];
        in.readFully(buf);
        // ... do some stuff and send binary data back in something like
        out.writeByte(1);
        out.writeInt(10000);
        // ... etc.

    Flex needs to connect to localhost:6666, do the handshake and read data. I got something like this:

        try {
            var socket:Socket = new Socket();
            socket.connect('192.168.110.1', 9999);
            Alert.show('Connected.');
            socket.writeByte(108); // 'l'
            socket.writeByte(115); // 's'
            socket.writeByte(4);
            socket.writeMultiByte('HHHH', 'ISO-8859-1');
            socket.flush();
        } catch (err:Error) {
            Alert.show(err.message + ": " + err.toString());
        }

    The first thing Flex does is send a <policy-file-request/>. I've modified the server to respond with:

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <site-control permitted-cross-domain-policies="master-only"/>
            <allow-access-from domain="192.168.110.1" to-ports="*" />
        </cross-domain-policy>

    After that an EOFException happens on the server, and that's it. So the question is: am I approaching the whole streaming-data issue wrong when it comes to Flex? Am I sending the policy file wrong? Unfortunately, I can't seem to find a good, solid example of how to do it. It seems to me that Flex can do binary client-server applications, but I personally lack some basic knowledge. I'm using Flex 3.5 in the IntelliJ IDEA IDE. Any help is appreciated. Thank you!
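
    One detail worth checking (an assumption on my part, based on how Flash Player's socket policy handshake is documented to work): the <policy-file-request/> from the player is terminated by a null byte, and the policy XML sent back must also be terminated by a null byte before the player will proceed - a missing terminator on either side would explain the EOFException. A minimal sketch of a standalone Java policy responder, with the port (843 is the conventional socket-policy port) and the wide-open policy purely illustrative:

        import java.io.*;
        import java.net.*;

        public class PolicyFileServer {
            // The trailing \0 is the important part - Flash Player waits for it.
            private static final String POLICY =
                "<?xml version=\"1.0\"?>" +
                "<cross-domain-policy>" +
                "<allow-access-from domain=\"*\" to-ports=\"*\"/>" +
                "</cross-domain-policy>\0";

            public static void main(String[] args) throws IOException {
                try (ServerSocket server = new ServerSocket(843)) {
                    while (true) {
                        try (Socket client = server.accept()) {
                            // Read until the null byte that ends <policy-file-request/>
                            ByteArrayOutputStream request = new ByteArrayOutputStream();
                            int b;
                            while ((b = client.getInputStream().read()) > 0) {
                                request.write(b);
                            }
                            if (request.toString("ISO-8859-1").contains("<policy-file-request/>")) {
                                client.getOutputStream().write(POLICY.getBytes("ISO-8859-1"));
                                client.getOutputStream().flush();
                            }
                        }
                    }
                }
            }
        }

    The same policy-plus-null response can instead be written by the existing data server on port 9999 when it sees the request, before switching to the binary protocol; the separate-port version is just easier to show in isolation.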

    Read the article

  • Confused as to how to validate spring mvc form, what are my options?

    - by Blankman
    Latest Spring MVC, using Freemarker. Hoping someone could tell me what my options are in terms of validating a form with Spring MVC, and what the recommended way would be to do this.

    I have a form that doesn't map directly to a model: it has input fields that, when posted, will be used to initialize 2 model objects which I will then need to validate, and if they pass, I will save them. If they fail, I want to return back to the form, pre-fill the values with what the user entered and display the error messages.

    I have read here and there about 2 methods, one of which I have done and understand how it works:

        @RequestMapping(...., method = RequestMethod.POST)
        public ModelAndView myMethod(@Valid MyModel myModel, BindingResult bindingResult) {
            ModelAndView mav = new ModelAndView("some/view");
            mav.addObject("mymodel", myModel);
            if (bindingResult.hasErrors()) {
                return mav;
            }
            // ...
        }

    Now this worked when my form mapped directly to the model, but in my situation I have:

    - form fields that don't map to any specific model; they have a few properties from 2 models.
    - before validation occurs, I need to create the 2 models manually, set the values from the form, and manually set some properties as well.
    - a need to call validate on both models (model1, model2), and append their error messages to the errors collection which I need to pass back to the same view page if things don't work.
    - when the form posts, I have to do some database calls, and based on those results may need to add additional messages to the errors collection.

    Can someone tell me how to do this sort of validation? Pseudo code below:

        Model1 model1 = new Model1();
        Model2 model2 = new Model2();

        // manually or somehow automatically set the posted form values to model1 and model2.

        // set some fields manually, not from the posted form
        model1.setProperty10(GlobalSettings.getDefaultProperty10());
        model2.setProperty11(GlobalSettings.getDefaultProperty11());

        // db calls, if they fail, add to errors collection

        if (bindingResult.hasErrors()) {
            return mav;
        }

        // validation passed, save
        Model1Service.save(model1);
        Model2Service.save(model2);

        // redirect to another view

    Update: I am using the JSR 303 annotations on my models right now, and it would be great if I can still use those.

    Update II: Please read the bounty description below for a summary of what I am looking for.
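
    For what it's worth, one way this is often handled (a sketch under the assumption that Model1 and Model2 carry JSR-303 annotations, as mentioned in the update; the class and error-code names here are made up) is to bind the request to a simple form-backing object, build the two models from it by hand, run a javax.validation Validator over each, and funnel all violations into one Spring Errors/BindingResult that the view renders:

        import java.util.Set;
        import javax.validation.ConstraintViolation;
        import javax.validation.Validation;
        import javax.validation.Validator;
        import org.springframework.validation.BeanPropertyBindingResult;
        import org.springframework.validation.Errors;

        public class TwoModelValidator {

            private final Validator validator =
                    Validation.buildDefaultValidatorFactory().getValidator();

            /** Runs JSR-303 validation on one object and copies violations into a shared Errors. */
            public void validateInto(Object model, Errors errors) {
                Set<ConstraintViolation<Object>> violations = validator.validate(model);
                for (ConstraintViolation<Object> v : violations) {
                    // Registered as global errors so the view can list them in one block.
                    errors.reject("constraint.violation",
                                  v.getPropertyPath() + " " + v.getMessage());
                }
            }

            public Errors validateBoth(Object model1, Object model2) {
                Errors errors = new BeanPropertyBindingResult(model1, "form");
                validateInto(model1, errors);
                validateInto(model2, errors);
                // Manual/database checks can append to the same collection:
                // if (nameAlreadyTaken) errors.reject("duplicate.name", "That name is already taken");
                return errors;
            }
        }

    The controller would then put the returned Errors object into the ModelAndView (or copy its messages into whatever the Freemarker template reads) and return the form view when errors.hasErrors() is true.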

    Read the article

  • H.264 over RTP - Identify SPS and PPS Frames

    - by Toby
    I have a raw H.264 stream from an IP camera packed in RTP frames. I want to get raw H.264 data into a file so I can convert it with ffmpeg. When I want to write the data into my raw H.264 file, I found out it has to look like this:

        00 00 01 [SPS]
        00 00 01 [PPS]
        00 00 01 [NALByte]
        [PAYLOAD RTP Frame 1]    // Payload always without the first 2 bytes -> NAL
        [PAYLOAD RTP Frame 2]
        [... until PAYLOAD frame with mark bit received]    // From here it's a new video frame
        00 00 01 [NAL BYTE]
        [PAYLOAD RTP Frame 1]
        ....

    I get the SPS and the PPS from the Session Description Protocol out of my preceding RTSP communication. Additionally, the camera sends the SPS and the PPS in two single messages before starting with the video stream itself. So I capture the messages in this order:

        1. Preceding RTSP communication ( including SDP with SPS and PPS )
        2. RTP frame with payload: 67 42 80 28 DA 01 40 16 C4   // This is the SPS
        3. RTP frame with payload: 68 CE 3C 80                  // This is the PPS
        4. RTP frame with payload: ...                          // Video data

    Then there come some frames with payload, and at some point an RTP frame with the marker bit = 1. This means (if I got it right) that I have a complete video frame. After this I write the prefix sequence ( 00 00 01 ) and the NAL from the payload again, and go on with the same procedure.

    Now my camera sends me the SPS and the PPS again after every 8 complete video frames ( again in two RTP frames, as seen in the example above ). I know that especially the PPS can change in between streaming, but that's not the problem. My questions are now:

    1. Do I need to write the SPS/PPS every 8th video frame? If my SPS and my PPS don't change, it should be enough to have them written at the very beginning of my file and nothing more?

    2. How do I distinguish between SPS/PPS and normal RTP frames? In my C++ code which parses the transmitted data, I need to tell the RTP frames with normal payload apart from the ones carrying the SPS/PPS. How can I distinguish them? Okay, the SPS/PPS frames are usually way smaller, but that's not a safe thing to rely on. If I ignore them I need to know which data I can throw away, and if I need to write them I need to put the 00 00 01 prefix in front of them. Or is it a fixed rule that they occur every 8th video frame?
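
    On question 2, the usual way to tell these apart is not by size but by the NAL unit type, which lives in the low five bits of the first payload byte (that's why the SPS payload above starts with 67 - 0x67 & 0x1F = 7 - and the PPS with 68, type 8). A small sketch of the check (plain Java here just for illustration; the same bit test is one line in C++):

        public final class NalUnits {

            // Low five bits of the first payload byte = NAL unit type (H.264 / RFC 3984).
            public static int nalType(byte[] rtpPayload) {
                return rtpPayload[0] & 0x1F;
            }

            public static boolean isSps(byte[] rtpPayload) { return nalType(rtpPayload) == 7; }
            public static boolean isPps(byte[] rtpPayload) { return nalType(rtpPayload) == 8; }

            // Types 24 (STAP-A) and 28 (FU-A) mean the payload is an aggregated or fragmented
            // packet rather than a single plain NAL unit, and needs unwrapping first.
            public static boolean isSingleNalUnit(byte[] rtpPayload) {
                int t = nalType(rtpPayload);
                return t >= 1 && t <= 23;
            }
        }

    As for question 1: repeating the SPS/PPS in the stream mainly lets decoders that join mid-stream sync up; for a file that is decoded from the start, writing them once at the beginning is typically enough, as long as they really don't change.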

    Read the article

  • Network communication for a turn based board game

    - by randooom
    Hi all, my first question here, so please don't be too harsh if something went wrong :)

    I'm currently a CS student (from Germany, if this info is of any use ;) ) and we got a freely selectable programming assignment, which we have to write as a C++/CLI Windows Forms application. My team - two others and me - decided to go for a network-compatible port of the board game Risk. We divided the work into 3 parts, namely UI, game logic and network. Now we're at the part where we have to get everything working together, and the big question mark is: how do we get the clients synchronized with each other?

    Our approach so far is that each client has all the information necessary to calculate and/or execute all possible actions. Actually the clients have all information available at all times, aside from the game-initializing phase (add players, select map, etc.), which needs one "super-client" with some extra stuff to control things. This is the standard scenario of our approach:

    - player performs action, the action is valid and gets executed on the player's client
    - action is sent over the network
    - action is executed on the other clients

    The design (i.e. no code so far) we came up with is something like the following pseudo sequence diagram. Gui, Controller and Network implement all possible actions (i.e. all actions which change data) as methods from an interface, so each part can implement the method in a way that gets its job done.

    Example with Action(), on the player side's client:

        Player --> Gui.Action()
        Gui --> Controller.Action()
        Controller --> Logic.Action()
        (Logic.Action() == NoError)? Controller --> Network.Action()
        Network --> Parser.ParseAction()
        Network.Send(msg)

    On all other clients:

        Network.Recv(msg)
        Network --> Parser.Deparse(msg)
        Parser --> Logic.Action()
        Logic --> Gui.Action()

    The questions: Is this a viable approach to our task? Any better/easier way to do this? Recommendations, critique?

    Our knowledge (so you can better target your answer): we are on the beginner side in regards to programming somewhat larger projects in a small team. All of us have some general programming experience and a basic understanding of the .NET libraries and Windows Forms. If you need any further information, please feel free to ask.
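
    The shape described here - validate locally, apply locally, then ship the same action to every other client and apply it there - is essentially a lockstep design, and it does work for a turn-based game. A very rough sketch of the common-interface idea (written in Java purely for compactness; the project itself is C++/CLI, and all names here are invented):

        // Every state-changing move implements one interface, so the GUI, the rules
        // engine and the network layer can all handle it uniformly.
        interface GameAction extends java.io.Serializable {
            boolean isValid(GameState state);   // rules check, identical on every client
            void apply(GameState state);        // deterministic state change
        }

        class GameState implements java.io.Serializable {
            // territories, armies, current player, ... (omitted)
        }

        class Network {
            void broadcast(GameAction action) { /* serialize and send, e.g. over TCP */ }
        }

        class LocalClient {
            private final GameState state = new GameState();
            private final Network network = new Network();

            // Called by the GUI when the local player makes a move.
            void perform(GameAction action) {
                if (action.isValid(state)) {
                    action.apply(state);        // execute locally first
                    network.broadcast(action);  // then send the exact same action to the others
                }
            }

            // Called by the network layer when a remote player's action arrives.
            void onRemoteAction(GameAction action) {
                if (action.isValid(state)) {    // re-validate defensively before applying
                    action.apply(state);
                }
            }
        }

    The one rule that keeps clients in sync with this approach is that apply() must be fully deterministic (any dice rolls are rolled once and carried inside the action), since every client replays the same actions in the same order.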

    Read the article

  • NSTimer as a self-targeting ivar.

    - by Matt Wilding
    I have come across an awkward situation where I would like to have a class with an NSTimer instance variable that repeatedly calls a method of the class as long as the class is alive. For illustration purposes, it might look like this:

        // .h
        @interface MyClock : NSObject {
            NSTimer* _myTimer;
        }
        - (void)timerTick;
        @end

        // .m
        @implementation MyClock

        - (id)init {
            self = [super init];
            if (self) {
                _myTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0f
                                                             target:self
                                                           selector:@selector(timerTick)
                                                           userInfo:nil
                                                            repeats:YES] retain];
            }
            return self;
        }

        - (void)dealloc {
            [_myTimer invalidate];
            [_myTimer release];
            [super dealloc];
        }

        - (void)timerTick {
            // Do something fantastic.
        }

        @end

    That's what I want. I don't want to have to expose an interface on my class to start and stop the internal timer; I just want it to run while the class exists. Seems simple enough. But the problem is that NSTimer retains its target. That means that as long as that timer is active, it is keeping the class from being dealloc'd by normal memory management methods, because the timer has retained it. Manually adjusting the retain count is out of the question.

    This behavior of NSTimer seems like it would make it difficult to ever have a repeating timer as an ivar, because I can't think of a time when an ivar should retain its owning class. This leaves me with the unpleasant duty of coming up with some method of providing an interface on MyClock that allows users of the class to control when the timer is started and stopped. Besides adding unneeded complexity, this is annoying because having one owner of an instance of the class invalidate the timer could step on the toes of another owner who is counting on it to keep running. I could implement my own pseudo-retain-count system for keeping the timer running but, ...seriously? That is way too much work for such a simple concept. Any solution I can think of feels hacky.

    I ended up writing a wrapper for NSTimer that behaves exactly like a normal NSTimer, but doesn't retain its target. I don't like it, and I would appreciate any insight.

    Read the article

  • Speed Problem with Wireless Connectivity on Cisco 877w

    - by Carl Crawley
    Having a bit of a weird one with my local LAN setup. I recently installed a Cisco 877W router on my ADSL2+ connection and all is working really well. I upgraded the IOS to 12.4 and my wired clients are streaming super-fast at 1.3mb/s.

    However, there seems to be an issue with my wireless clients - I can't seem to stream any data across the local wireless connection (LAN), and using the Internet, whilst responsive enough, isn't really comparable with the wired connection speed. For example, all devices are connected to an 8-port Gb switch on FE0 from the router, along with a NAS disk; on my wired clients I can transfer/stream etc. absolutely fine - however, transferring a local 700Mb file on my local LAN estimates 7-8 hours to transfer :(

    The wireless config is as follows:

        interface Dot11Radio0
         description WIRELESS INTERFACE
         no ip address
         !
         encryption mode ciphers tkip
         !
         ssid [MySSID]
         !
         speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0
         channel 2462
         station-role root
         rts threshold 2312
         world-mode dot11d country GB indoor
         bridge-group 1
         bridge-group 1 subscriber-loop-control
         bridge-group 1 spanning-disabled
         bridge-group 1 block-unknown-source
         no bridge-group 1 source-learning
         no bridge-group 1 unicast-flooding

    All devices are connected to the Gb switch, which is connected to FE0 with the following:

        Hardware is Fast Ethernet, address is 0021.a03e.6519 (bia 0021.a03e.6519)
        Description: Uplink to Switch
        MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
          reliability 255/255, txload 1/255, rxload 1/255
        Encapsulation ARPA, loopback not set
        Keepalive set (10 sec)
        Full-duplex, 100Mb/s
        ARP type: ARPA, ARP Timeout 04:00:00
        Last input never, output never, output hang never
        Last clearing of "show interface" counters never
        Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
        Queueing strategy: fifo
        Output queue: 0/40 (size/max)
        5 minute input rate 14000 bits/sec, 19 packets/sec
        5 minute output rate 167000 bits/sec, 23 packets/sec
        177365 packets input, 52089562 bytes, 0 no buffer
        Received 919 broadcasts, 0 runts, 0 giants, 0 throttles
        260 input errors, 260 CRC, 0 frame, 0 overrun, 0 ignored
        0 input packets with dribble condition detected
        156673 packets output, 106218222 bytes, 0 underruns
        0 output errors, 0 collisions, 2 interface resets
        0 babbles, 0 late collision, 0 deferred
        0 lost carrier, 0 no carrier
        0 output buffer failures, 0 output buffers swapped out

    Not sure why I'm having problems on the wireless, and I've reached the end of my Cisco knowledge... Thanks for any pointers! Carl

    Read the article

  • Linq-to-SQL: How to perform a count on a sub-select

    - by Peter Bridger
    I'm still trying to get my head round how to use LINQ-to-SQL correctly, rather than just writing my own sprocs. In the code below, a userId is passed into the method, then LINQ uses this to get all rows from the Group/GroupUser tables matching the userId. The primary key of the GroupUser table is GroupUserId, which is a foreign key in the Group table.

        /// <summary>
        /// Return summary details about the groups a user belongs to
        /// </summary>
        /// <param name="userId"></param>
        /// <returns></returns>
        public List<Group> GroupsForUser(int userId)
        {
            DataAccess.KINv2DataContext db = new DataAccess.KINv2DataContext();
            List<Group> groups = new List<Group>();

            groups = (from g in db.Groups
                      join gu in db.GroupUsers on g.GroupId equals gu.GroupId
                      where g.Active == true && gu.UserId == userId
                      select new Group
                      {
                          Name = g.Name,
                          CreatedOn = g.CreatedOn
                      }).ToList<Group>();

            return groups;
        }

    This works fine, but I'd also like to return the total number of users who are in a group and also the total number of contacts that fall under ownership of the group. Pseudo code ahoy!

        /// <summary>
        /// Return summary details about the groups a user belongs to
        /// </summary>
        /// <param name="userId"></param>
        /// <returns></returns>
        public List<Group> GroupsForUser(int userId)
        {
            DataAccess.KINv2DataContext db = new DataAccess.KINv2DataContext();
            List<Group> groups = new List<Group>();

            groups = (from g in db.Groups
                      join gu in db.GroupUsers on g.GroupId equals gu.GroupId
                      where g.Active == true && gu.UserId == userId
                      select new Group
                      {
                          Name = g.Name,
                          CreatedOn = g.CreatedOn,

                          // ### This is the SQL I would write to get the data I want ###
                          MemberCount = ( SELECT COUNT(*) FROM GroupUser AS GU
                                          WHERE GU.GroupId = g.GroupId ),
                          ContactCount = ( SELECT COUNT(*) FROM Contact AS C
                                           WHERE C.OwnerGroupId = g.GroupId )
                          // ### End of extra code ###
                      }).ToList<Group>();

            return groups;
        }

    Read the article

  • Delay keyboard input help

    - by Stradigos
    I'm so close! I'm using the XNA Game State Management example found here and trying to modify how it handles input so I can delay the key / create an input buffer.

    In GameplayScreen.cs I've declared a double called elapsedTime and set it equal to 0. In the HandleInput method I've changed the Keys.Right button press to:

        if (keyboardState.IsKeyDown(Keys.Left))
            movement.X -= 50;

        if (keyboardState.IsKeyDown(Keys.Right))
        {
            elapsedTime -= gameTime.ElapsedGameTime.TotalMilliseconds;
            if (elapsedTime <= 0)
            {
                movement.X += 50;
                elapsedTime = 10;
            }
        }
        else
        {
            elapsedTime = 0;
        }

    The pseudo code: if the right arrow key is not pressed, set elapsedTime to 0. If it is pressed, elapsedTime equals itself minus the milliseconds since the last frame. If the difference then equals 0 or less, move the object 50, and then set elapsedTime to 10 (the delay). If the key is being held down, elapsedTime should never be set to 0 via the else. Instead, after elapsedTime is set to 10 following a successful check, elapsedTime should get lower and lower because the TotalMilliseconds is being subtracted from it. When that reaches 0, it passes the check again and moves the object once more.

    The problem is, it moves the object once per press but doesn't work if you hold the key down. Can anyone offer any sort of tip/example/bit of knowledge towards this? Thanks in advance, it's been driving me nuts. In theory I thought this would for sure work.

    CLARIFICATION: Think of a grid when you're thinking about how I want the block to move. Instead of just fluidly moving across the screen, it's moving by its width (sort of jumping) to the next position. If I hold down the key, it races across the screen. I want to slow this whole process down so that holding the key creates an X-millisecond delay between it 'jumping'/moving by its width.

    EDIT: Turns out gameTime.ElapsedGameTime.TotalMilliseconds is returning 0... all of the time. I have no idea why.

    Read the article

  • CSS Drop-Shadows Without Images

    - by Spencer B.
    I'm trying to use Nicolas Gallagher's brilliant CSS work on applying CSS drop-shadows to elements without images and without extra markup, using the :before and :after pseudo-elements. His code is provided below...

        .drop-shadow {
            position:relative;
            width:90%;
        }

        .drop-shadow:before,
        .drop-shadow:after {
            content:"";
            position:absolute;
            z-index:-1;
            bottom:15px;
            left:10px;
            width:50%;
            height:20%;
            max-width:300px;
            -webkit-box-shadow:0 15px 10px rgba(0, 0, 0, 0.7);
            -moz-box-shadow:0 15px 10px rgba(0, 0, 0, 0.7);
            box-shadow:0 15px 10px rgba(0, 0, 0, 0.7);
            -webkit-transform:rotate(-3deg);
            -moz-transform:rotate(-3deg);
            -o-transform:rotate(-3deg);
            transform:rotate(-3deg);
        }

        .drop-shadow:after {
            right:10px;
            left:auto;
            -webkit-transform:rotate(3deg);
            -moz-transform:rotate(3deg);
            -o-transform:rotate(3deg);
            transform:rotate(3deg);
        }

    I'm trying to target all images wrapped with an a tag, which in Wordpress are really full-size images that have been resized to a medium height and width in the backend. When the user clicks on the smaller image in the post, it opens up a new tab with the full-size view of the image (I'm sure you're already familiar with this if you use Wordpress).

    For some reason, I can't get his code to work, and I'm wondering if I'm targeting this wrong within my CSS. Can you help? In place of the .drop-shadow class that he uses, I'm targeting all images wrapped with an a tag within the #main-i div. So, like this...

        #main-i a img

    Does anyone know how to target it better than I have, so that I can get the drop shadows to be applied to all images within the specified div? Thanks for your help!

    P.S. An example of the image I am wanting to target with this CSS is the picture of the Haitian boy here: http://lifebridgecypress.org/our-heart/seventy-two/help-haiti

    Read the article

  • Vbscript - Creating a script that mirrors several sets of folders

    - by Kenny Bones
    Ok, this is my problem. I'm doing a logon script that basically copies Microsoft Word templates from a server path onto a local path of each computer. This is done using a check for group membership:

        If MemberOf(ObjGroupDict, "g_group1") Then
            oShell.Run "%comspec% /c %LOGONSERVER%\SYSVOL\mydomain.com\scripts\ROBOCOPY \\server\Templates\Group1\OFFICE2003\ " & TemplateFolder & "\" & " * /E /XO", 0, True
        End If

    Previously I used the /MIR switch of robocopy, which is excellent. But if a user is a member of more than one group, the /MIR switch removes the content from the first group, since it's mirroring the content from the second group. Meaning, I can't have both contents. This is "solved" by not using the /MIR switch and just letting the content get copied anyway. BUT the whole idea of having the templates on a server is that I can control the content the users receive through the script. So if I delete a file or folder from the server path, this doesn't replicate on the local computer, since I don't use the /MIR switch anymore. Comprende?

    So, what do I do? I did a small script that basically checks the folders and files and then removes them accordingly, but this actually ended up being the same functionality as the /MIR switch anyway. How do I solve this problem?

    Edit: I've found that what I actually need is a routine that scans my local template folder for files and folders and checks if the same structure exists in any of the source template folders. The server template folders are set up like this:

        \\fileserver\templates\group1\
        \\fileserver\templates\group2\
        \\fileserver\templates\group3\
        \\fileserver\templates\group4\
        \\fileserver\templates\group5\
        \\fileserver\templates\group6\

    And the script that does the copying is structured like this (pseudo):

        If User is MemberOf (group1) Then
            RoboCopy.exe \\fileserver\templates\group1\ c:\templates\workgroup *.* /E /XO
        End if

        If User is MemberOf (group2) Then
            RoboCopy.exe \\fileserver\templates\group2\ c:\templates\workgroup *.* /E /XO
        End if

        If User is MemberOf (group3) Then
            RoboCopy.exe \\fileserver\templates\group3\ c:\templates\workgroup *.* /E /XO
        End if

        Etc etc

    With the /E switch, I make sure it copies subfolders as well. And the /XO switch only copies files and folders that are newer than those in my local path. But it doesn't consider whether the local path contains files or folders that don't exist on the server template path. So after the copying is done, I would like to check whether any of the files or folders in c:\templates\workgroup actually exist in either of the sources, and if they don't, delete them from my local path. Something that could be combined with these membership checks perhaps?

    Read the article

  • Is there any way that an export-to-Excel function can be scalable?

    - by MusiGenesis
    Summary: ASP.Net website with a couple hundred users. Data is exported to Excel files which can be relatively large (~5 MB). In the pilot phase (just a few users), we are already seeing occasional errors on the server in the exporting method. Here's the stack trace:

        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
        at System.IO.MemoryStream.set_Capacity(Int32 value)
        at System.IO.MemoryStream.EnsureCapacity(Int32 value)
        at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
        at MS.Internal.IO.Packaging.TrackingMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
        at MS.Internal.IO.Packaging.SparseMemoryStream.WriteAndCollapseBlocks(Byte[] buffer, Int32 offset, Int32 count)
        at MS.Internal.IO.Packaging.SparseMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
        at MS.Internal.IO.Packaging.CompressEmulationStream.Write(Byte[] buffer, Int32 offset, Int32 count)
        at MS.Internal.IO.Packaging.CompressStream.Write(Byte[] buffer, Int32 offset, Int32 count)
        at MS.Internal.IO.Zip.ProgressiveCrcCalculatingStream.Write(Byte[] buffer, Int32 offset, Int32 count)
        at MS.Internal.IO.Zip.ZipIOModeEnforcingStream.Write(Byte[] buffer, Int32 offset, Int32 count)
        at System.IO.StreamWriter.Flush(Boolean flushStream, Boolean flushEncoder)
        at System.IO.StreamWriter.Write(String value)
        at System.Xml.XmlTextEncoder.Write(String text)
        at System.Xml.XmlTextWriter.WriteString(String text)
        at System.Xml.XmlText.WriteTo(XmlWriter w)
        at System.Xml.XmlAttribute.WriteContentTo(XmlWriter w)
        at System.Xml.XmlAttribute.WriteTo(XmlWriter w)
        at System.Xml.XmlElement.WriteTo(XmlWriter w)
        at System.Xml.XmlElement.WriteContentTo(XmlWriter w)
        at System.Xml.XmlElement.WriteTo(XmlWriter w)
        at System.Xml.XmlElement.WriteContentTo(XmlWriter w)
        at System.Xml.XmlElement.WriteTo(XmlWriter w)
        at System.Xml.XmlElement.WriteContentTo(XmlWriter w)
        at System.Xml.XmlElement.WriteTo(XmlWriter w)
        at System.Xml.XmlDocument.WriteContentTo(XmlWriter xw)
        at System.Xml.XmlDocument.WriteTo(XmlWriter w)
        at System.Xml.XmlDocument.Save(Stream outStream)
        at OfficeOpenXml.ExcelWorksheet.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelWorksheet.cs:line 605
        at OfficeOpenXml.ExcelWorkbook.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelWorkbook.cs:line 439
        at OfficeOpenXml.ExcelPackage.Save() in C:\temp\XXXXXXXXXX\ExcelPackage\ExcelPackage.cs:line 348
        at Framework.Exporting.Business.ExcelExport.BuildReport(HttpContext context)
        at WebUserControl.BtnXLS_Click(Object sender, EventArgs e) in C:\TEMP\XXXXXXXXXX\XXXXXXXXXX\OneList\UserControls\TicketReportExporter.ascx.cs:line 108
        at System.Web.UI.WebControls.Button.OnClick(EventArgs e)
        at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
        at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument)
        at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument)
        at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        --- End of inner exception stack trace ---
        at System.Web.UI.Page.HandleError(Exception e)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        at System.Web.UI.Page.ProcessRequest()
        at System.Web.UI.Page.ProcessRequestWithNoAssert(HttpContext context)
        at System.Web.UI.Page.ProcessRequest(HttpContext context)
        at ASP.XXXXXXXXXXX_aspx.ProcessRequest(HttpContext context) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\XXXX\cdf32a52\d1a5eabd\App_Web_enxdwlks.1.cs:line 0
        at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Even aside from this particular problem, in general exporting to Excel requires the instantiation of huge Excel objects on the server for each request, which I've always assumed disqualifies Excel for "serious" work on a highly-loaded server. Is there any general way to export to Excel in a "light-weight" manner? Would simply streaming the data into a CSV file work for this?

    Read the article

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it will gather a list of batches and recursively call itself, in order to iterate over batches. In (pseudo-)code, it looks something like this:

        CREATE PROCEDURE spProcedure
        AS
        BEGIN
            IF @code = 0
            BEGIN
                ...
                WHILE @@Fetch_Status = 0
                BEGIN
                    EXEC spProcedure @code
                    FETCH NEXT ... INTO @code
                END
            END
            ELSE
            BEGIN
                -- Disable indexes
                ...
                INSERT INTO table
                SELECT (...)
                -- Enable indexes
                ...

    Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return.

    From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me:

        SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds.

    I did find a forum message hinting that another spid should be killed before the other one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should be having the same spid, right?)

    In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not wait for hours while my server sits dead, pretending to be finishing a supposed rollback. Is there some way in which I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?

    Read the article

  • NavigationBar from UINavigationController not positioned correctly

    - by David Liu
    So, my iPad program has a pseudo-split view controller (one that I implemented, not the base SDK one), and it was working correctly a while ago. It has the basic layout (UINavController for the master, a content view controller for the detail on the right), but I have it so the master view doesn't disappear when rotated into portrait. Recently, I added a UITabBarController to contain the entire split view, which has made the navigation bar go wonky, while all the other views are positioned fine. In addition, the navigation bar only gets mispositioned when the program starts up while the iPad is in landscape or upside-down portrait. If it starts out in portrait, everything is fine.

    Relevant code, RootViewController.m:

        - (void)loadView {
            navController = [[NavigationBreadcrumbsController_Pad alloc] init];

            ABTableViewController_Pad *tableViewController = [[ABTableViewController_Pad alloc] initWithNibName:@"ABTableView"];
            master = [[UINavigationController_Pad alloc] initWithRootViewController:tableViewController];
            [tableViewController release];

            // Dummy blank UIViewController
            detail = [[UIViewController alloc] init];
            detail.view = [[[UIView alloc] init] autorelease];
            [detail.view setBackgroundColor:[UIColor grayColor]];

            self.view = [[[UIView alloc] init] autorelease];
            self.view.backgroundColor = [UIColor blackColor];

            [self positionViews];

            [self.view addSubview:navToolbarController.view];
            [self.view addSubview:master.view];
            [self.view addSubview:detail.view];
        }

        // Handles the repositioning of views for the current orientation
        - (void)positionViews {
            CGFloat tabBarOffset = 0;
            if (self.tabBarController) {
                tabBarOffset = self.tabBarController.tabBar.frame.size.height;
            }

            if (self.interfaceOrientation == UIInterfaceOrientationPortrait ||
                self.interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown) {
                self.view.frame = CGRectMake(0, 0, 768, 1004);
                navController.view.frame = CGRectMake(0, 0, 768, 44);

                // adjust master view
                [master.view setFrame:CGRectMake(0, 44, 320, 1024 - 44 - 20 - tabBarOffset)];
                // adjust detail view
                [detail.view setFrame:CGRectMake(321, 44, 448, 1024 - 44 - 20 - tabBarOffset)];
            }
            // Landscape layout
            else {
                self.view.frame = CGRectMake(0, 0, 748, 1024);
                navToolbarController.view.frame = CGRectMake(0, 0, 1024, 44);

                // adjust master view
                [master.view setFrame:CGRectMake(0, 44, 320, 768 - 44 - 20 - tabBarOffset)];
                // adjust detail view
                [detail.view setFrame:CGRectMake(321, 44, 1024 - 320, 768 - 44 - 20 - tabBarOffset)];
            }
        }

    Read the article

  • How do you implement position-sensitive zooming inside a JScrollPane?

    - by tucuxi
    I am trying to implement position-sensitive zooming inside a JScrollPane. The JScrollPane contains a component with a customized paint that will draw itself inside whatever space it is allocated - so zooming is as easy as using a MouseWheelListener that resizes the inner component as required. But I also want zooming into (or out of) a point to keep that point as central as possible within the resulting zoomed-in (or -out) view (this is what I refer to as 'position-sensitive' zooming), similar to how zooming works in Google Maps.

    I am sure this has been done many times before - does anybody know the "right" way to do it under Java Swing? Would it be better to play with Graphics2D's transformations instead of using JScrollPanes?

    Sample code follows:

        package test;

        import java.awt.*;
        import java.awt.event.*;
        import java.awt.geom.*;
        import javax.swing.*;

        public class FPanel extends javax.swing.JPanel {

            private Dimension preferredSize = new Dimension(400, 400);
            private Rectangle2D[] rects = new Rectangle2D[50];

            public static void main(String[] args) {
                JFrame jf = new JFrame("test");
                jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                jf.setSize(400, 400);
                jf.add(new JScrollPane(new FPanel()));
                jf.setVisible(true);
            }

            public FPanel() {
                // generate rectangles with pseudo-random coords
                for (int i = 0; i < rects.length; i++) {
                    rects[i] = new Rectangle2D.Double(
                        Math.random() * .8, Math.random() * .8,
                        Math.random() * .2, Math.random() * .2);
                }
                // mouse listener to detect scrollwheel events
                addMouseWheelListener(new MouseWheelListener() {
                    public void mouseWheelMoved(MouseWheelEvent e) {
                        updatePreferredSize(e.getWheelRotation(), e.getPoint());
                    }
                });
            }

            private void updatePreferredSize(int n, Point p) {
                double d = (double) n * 1.08;
                d = (n > 0) ? 1 / d : -d;
                int w = (int) (getWidth() * d);
                int h = (int) (getHeight() * d);
                preferredSize.setSize(w, h);
                getParent().doLayout();
                // Question: how do I keep 'p' centered in the resulting view?
            }

            public Dimension getPreferredSize() {
                return preferredSize;
            }

            private Rectangle2D r = new Rectangle2D.Float();

            public void paint(Graphics g) {
                super.paint(g);
                g.setColor(Color.red);
                int w = getWidth();
                int h = getHeight();
                for (Rectangle2D rect : rects) {
                    r.setRect(rect.getX() * w, rect.getY() * h,
                              rect.getWidth() * w, rect.getHeight() * h);
                    ((Graphics2D) g).draw(r);
                }
            }
        }
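
    One approach that tends to answer the "keep p under the cursor" comment (a sketch, not a tested drop-in: it assumes getParent() really is the JViewport, as it is when the FPanel is the scroll pane's view, and the exact revalidate/doLayout call may need adjusting): since paint() scales everything with the component size, the content under point p moves to roughly (p.x*d, p.y*d) after the resize, so shifting the viewport by the same amount keeps it under the mouse:

        private void updatePreferredSize(int n, Point p) {
            double d = (double) n * 1.08;
            d = (n > 0) ? 1 / d : -d;

            preferredSize.setSize((int) (getWidth() * d), (int) (getHeight() * d));

            // Cursor offset inside the visible rectangle, before the resize.
            JViewport viewport = (JViewport) getParent();
            Point viewPos = viewport.getViewPosition();
            int offsetX = p.x - viewPos.x;
            int offsetY = p.y - viewPos.y;

            revalidate();   // let the scroll pane adopt the new preferred size

            // The old content point p now sits at about (p.x*d, p.y*d); scroll so that
            // it ends up at the same offset under the cursor as before.
            int newViewX = Math.max(0, (int) Math.round(p.x * d) - offsetX);
            int newViewY = Math.max(0, (int) Math.round(p.y * d) - offsetY);
            viewport.setViewPosition(new Point(newViewX, newViewY));

            repaint();
        }

    The alternative raised in the question - drawing through a Graphics2D scale transform and doing the same viewport arithmetic - gives the same behaviour but avoids resizing the component itself.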

    Read the article

  • Move options between multiple dropdown lists

    - by Martha
    We currently have a form with the standard multi-select functionality of "here are the available options, here are the selected options, here are some buttons to move stuff back and forth." However, the client now wants the ability to not just select certain items, but to also categorize them. For example, given a list of books, they want to not just select the ones they own, but also the ones they've read, the ones they would like to read, and the ones they've heard about. (All examples fictional.) Thankfully, a selected item can only be in one category at a time.

    I can find many examples of moving items between listboxes, but not a single one for moving items between multiple listboxes. To add to the complication, the form needs to have two sets of list+categories, e.g. a list of movies that need to be categorized in addition to the aforementioned books.

    An additional problem is that sorting between lists is all well and good in the javascript-enabled world, but I can't really think of a good fallback interface for, say, mobile browsers. Maybe a pseudo-listbox with radio buttons next to each item? The master list of items will in general be very long - over 100 items, certainly, possibly many more. Any given category will most likely contain one or two selected items, but the possibility exists for a category to have dozens of selected items, or zero selected items.

    As far as OS and stuff, the site is in classic asp (quit snickering!), the server-side code is VBScript, and so far we've avoided the various Javascript libraries by the simple expedient of almost never using client-side scripting. This one form for this one client is currently the big exception. Give 'em an inch and they want a mile...

    Oh, and I have to add: I suck at Javascript, or really at any C-descendant language. Curly braces give me hives. I'd really, really like something I can just copy & paste into my page, maybe tweak some variable names, and never look at it again. A girl can dream, can't she? :)

    Read the article

  • How could I send live video stream to remote server from my phone !!!

    - by poc
    Hello, I have a problem with streaming video to a server in real time from my phone. That is, let my phone be an IP camera, and a server can watch the live video from my phone. I have googled many solutions, but none of them solve my problem.

    I use MediaRecorder to record; it can save the video file to the SD card correctly. Then I referred to this page and used some methods as follows:

        skt = new Socket(InetAddress.getByName(hostname), port);
        pfd = ParcelFileDescriptor.fromSocket(skt);
        mediaRecorder.setOutputFile(pfd.getFileDescriptor());

    Now it seems I can send the video stream while recording. However, I wrote a receiver-side program to receive the video stream from Android, but it doesn't work. Is there any error? I can receive the file, but I cannot open the video file. I guess the problem may be caused by the file format?

    Here is an outline of my code. On the Android side:

        Socket skt = new Socket(hostIP, port);
        ParcelFileDescriptor pfd = ParcelFileDescriptor.fromSocket(skt);
        ....
        mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        mediaRecorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT);
        mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        mediaRecorder.setOutputFile(pfd.getFileDescriptor());
        .....
        mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
        mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);
        .....
        mediaRecorder.start();

    On the receiver side (my ACER notebook):

        // anyway, I don't think the file extension will have any effect
        File video = new File(strDate + ".3gpp");
        FileOutputStream fos;
        try {
            fos = new FileOutputStream(video);
            byte[] data = new byte[1024];
            int count = -1;
            while ((count = fin.read(data, 0, 1024)) != -1) {
                fos.write(data, 0, count);
                fos.flush();
            }
            fos.close();
            fin.close();

    I have been confused for a long time... thanks in advance.

    Read the article

  • Compose synthetic English phrase that would contain 160 bits of recoverable information

    - by Alexander Gladysh
    I have 160 bits of random data. Just for fun, I want to generate a pseudo-English phrase to "store" this information in. I want to be able to recover the information from the phrase. Note: this is not a security question; I don't care if someone else will be able to recover the information or even detect that it is there or not.

    Criteria for better phrases, from most important to least:

    - Short
    - Unique
    - Natural-looking

    The current approach, suggested here: take three lists of 1024 nouns, verbs and adjectives each (picking the most popular ones), and generate a phrase by the following pattern, reading 20 bits for each word:

        Noun verb adjective verb,
        Noun verb adjective verb,
        Noun verb adjective verb,
        Noun verb adjective verb.

    Now, this seems to be a good approach, but the phrase is a bit too long and a bit too dull.

    I have found a corpus of words here (Part of Speech Database). After some ad-hoc filtering, I calculated that this corpus contains approximately

    - 50690 usable adjectives
    - 123585 nouns
    - 15301 verbs

    This allows me to use up to

    - 16 bits per noun (actually 16.9, but I can't figure out how to use fractional bits)
    - 15 bits per adjective
    - 13 bits per verb

    For the noun-verb-adjective-verb pattern this gives 57 bits per "sentence" in the phrase. This means that, if I use all the words I can get from this corpus, I can generate three sentences instead of four (160 / 57 ≈ 2.8):

        Noun verb adjective verb,
        Noun verb adjective verb,
        Noun verb adjective verb.

    Still a bit too long and dull. Any hints on how I can improve it? What I see that I can try:

    - Try to compress my data somehow before encoding. But since the data is completely random, only some phrases would be shorter (and, I guess, not by much).
    - Improve the phrase pattern so it would look better.
    - Use several patterns, using the first word in the phrase to somehow indicate for future decoding which pattern was used (for example, use the last letter or even the length of the word). Pick the pattern according to the first bytes of the data. ...I'm not good enough at English to come up with better phrase patterns. Any suggestions?
    - Use more linguistics in the pattern - different tenses etc. ...I guess I would need a much better word corpus than I have now for that. Any hints on where I can get a suitable one?
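
    On the fractional-bits point: if the 160 bits are treated as one big unsigned integer and each word is peeled off by dividing by the size of its word list (a mixed-radix encoding), nothing is wasted to rounding - with list sizes of 123585, 15301 and 50690 a noun-verb-adjective-verb sentence carries log2(123585*15301*50690*15301) ≈ 60.3 bits, so 160 bits still fit in three sentences with room to spare. A sketch (list sizes and interleaving order are whatever the phrase pattern dictates; padding to a fixed word count is left out):

        import java.math.BigInteger;
        import java.util.ArrayList;
        import java.util.List;

        public class PhraseEncoder {

            // Encode arbitrary data as indices into word lists whose sizes repeat in the
            // given order (e.g. noun, verb, adjective, verb, noun, verb, ...).
            public static List<Integer> encode(byte[] data, int[] listSizes) {
                BigInteger value = new BigInteger(1, data);   // the 160 bits as one number
                List<Integer> indices = new ArrayList<>();
                int i = 0;
                while (value.signum() > 0) {
                    BigInteger radix = BigInteger.valueOf(listSizes[i % listSizes.length]);
                    BigInteger[] qr = value.divideAndRemainder(radix);
                    indices.add(qr[1].intValue());            // index into the i-th word list
                    value = qr[0];
                    i++;
                }
                return indices;
            }

            // Decoding folds the indices back together in reverse order.
            public static BigInteger decode(List<Integer> indices, int[] listSizes) {
                BigInteger value = BigInteger.ZERO;
                for (int i = indices.size() - 1; i >= 0; i--) {
                    BigInteger radix = BigInteger.valueOf(listSizes[i % listSizes.length]);
                    value = value.multiply(radix).add(BigInteger.valueOf(indices.get(i)));
                }
                return value;
            }
        }

    Each returned index is then just a lookup into the corresponding word list; in practice the phrase length would be fixed (pad the number of words) so that data beginning with zero bits round-trips cleanly.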

    Read the article

  • Trying to style the first tbody different than others without introducing another class.

    - by mwiik
    I have a table with multiple tbody's, each of which has a classed row, and I want the classed row in the first tbody to have style differences, but I am unable to get tbody:first-child to work in any browser. Perhaps I am missing something, or maybe there is a workaround. Ideally, I would like to provide the programmers with a single tbody section they can use as a template, but will otherwise have to add a class to the first tbody, making for an extra test in the programming.

    The html is straightforward:

        <tbody class="subGroup">
            <tr class="subGroupHeader">
                <th colspan="8">All Grades: Special Education</th>
                <td class="grid" colspan="2"><!-- contains AMO line --></td>
                <td><!-- right 100 --></td>
            </tr>
            <tr>...</tr>
            <!-- several more rows of data -->
        </tbody>

    There are several tbody's per table. I want to style the th and td's within tr.subGroupHeader in the very first tbody differently than the rest. Just to illustrate, I want to add a border-top to the tr.subGroupHeader cells, such as:

        table.databargraph.continued tr.subGroupHeader th,
        table.databargraph.continued tr.subGroupHeader td {
            border-top: 6px solid red;
        }

    For the first tbody, I am trying:

        table.databargraph.continued tbody:first-child tr.subGroupHeader th {
            border-top: 6px solid blue;
        }

    However, this doesn't seem to work in any browser (I've tested in Safari, Opera, Firefox, and PrinceXML, all on my Mac).

    Curiously, the usually excellent Xyle Scope tool indicates that the blue border should be taking precedence, though it obviously is not. See the screenshot at http://s3.amazonaws.com/ember/kUD8DHrz06xowTBK3qpB2biPJrLWTZCP_o.png

    The screenshot shows (top left) that the American Indian th is selected, and (bottom right), via black instead of gray text for the CSS declaration, that the blue border should indeed be given precedence. Yet the border is red. I may be missing something fundamental, like pseudo-classes not working for tbodys at all...

    This really only needs to work in PrinceXML, and maybe Safari so I can see what I'm doing with webkit-based CSS tools. Note I did try a selector like tr.subGroupHeader:first-child, but such tr's apparently consider the tbody their parent (as I would suspect), so that made every border blue. Thanks...

    Read the article

  • XPath query returning 'false' in SimpleXML

    - by Drew
    Hi all, I have an xml fragment as such:

        <meta_tree type="root">
            <meta_data>
                <meta_cat>Content Provider</meta_cat>
                <data>Mammoth</data>
            </meta_data>
            <meta_data>
                <meta_cat>Genre</meta_cat>
                <data>Games</data>
            </meta_data>
            <meta_data>
                <meta_cat>Channel Name</meta_cat>
                <data>Games Trailers</data>
            </meta_data>
            <meta_data>
                <meta_cat>Collection</meta_cat>
                <data>Strategy</data>
            </meta_data>
            <meta_data>
                <meta_cat>Custom 1</meta_cat>
                <data>PC</data>
            </meta_data>
            <meta_data>
                <meta_cat>DRM Protected</meta_cat>
                <data>N</data>
            </meta_data>
            <meta_data>
                <meta_cat>Aspect Ratio</meta_cat>
                <data>16:9</data>
            </meta_data>
            <meta_data>
                <meta_cat>Streaming Type</meta_cat>
                <data>VOD</data>
            </meta_data>
        </meta_tree>

    which I garnered from the output of $meta_tree->asXML(). So given that, I need an XPath query for each element, so I'm using:

        $meta_tree->xpath("/meta_data[meta_cat='Content Provider']");

    but this returns false. I have tried:

        "/meta_tree/meta_data[meta_cat='Content Provider']"
        "//meta_data[meta_cat='Content Provider']"

    I've been using AquaPath, which validates my query, so I'm not sure what I'm doing wrong. Anyone got any ideas? DJS.

    Read the article

  • How do I do high quality scaling of a image?

    - by pbhogan
    I'm writing some code to scale a 32 bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the scaled image is not acceptable. I compared the same image scaled by OpenGL (i.e. my video card) and my routine, and it's miles apart in quality. I've Google Code Searched and scoured source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.), but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :)

    I have seen a similar question asked on Stack Overflow before, but it wasn't really answered in this way; I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudo code... anything to help me learn and do.

    Here's what I'm looking for:

    1. No assembler (I'm writing very portable code for multiple processor types).
    2. No dependencies on external libraries.
    3. I am primarily concerned with scaling DOWN, but will also need to write a scale-up routine later.
    4. Quality of the result and clarity of the algorithm are most important (I can optimize it later).

    My routine essentially takes the following form:

        DrawScaled( uint32 *src, uint32 *dst,
                    src_x, src_y, src_w, src_h,
                    dst_x, dst_y, dst_w, dst_h );

    Thanks!

    UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).

    EXAMPLE: Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want, on a 256x256 bitmap scaled to 55x55. And finally: the original 256x256 image.
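
    Not an answer to the "write the filter from scratch" requirement, but one technique that is cheap to describe and often good enough for large downscales is progressive halving: shrink by at most 2x per step with bilinear sampling, then do one final step to the exact size, so every source pixel contributes somewhere and much of the usual single-step aliasing disappears. A sketch (in Java only because its standard 2D API keeps the example short - the same loop structure applies to a hand-rolled bilinear over raw uint32 buffers):

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;

        public class Downscaler {

            // Halve repeatedly, then one last bilinear resize to the exact target size.
            public static BufferedImage scaleDown(BufferedImage src, int targetW, int targetH) {
                BufferedImage current = src;
                int w = src.getWidth();
                int h = src.getHeight();

                while (w / 2 >= targetW && h / 2 >= targetH) {
                    w /= 2;
                    h /= 2;
                    current = resize(current, w, h);
                }
                if (w != targetW || h != targetH) {
                    current = resize(current, targetW, targetH);
                }
                return current;
            }

            private static BufferedImage resize(BufferedImage src, int w, int h) {
                BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = dst.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                g.drawImage(src, 0, 0, w, h, null);
                g.dispose();
                return dst;
            }
        }

    For the single-pass weighted filter the update asks about (bicubic/Lanczos-style, with every contributing source pixel weighted per destination pixel), the multi-step version above is a reasonable baseline to compare sharpness against while getting the weights right.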

    Read the article

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts by a CTE to return a subset of the rows from a large table. There are several joins in the CTE. A couple of inner and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criterias. The query is quite complex but here is a simplified pseudo-version of it WITH DataCTE as ( SELECT [columns] FROM table INNER JOIN table2 ON [...] INNER JOIN table3 ON [...] LEFT JOIN table3 ON [...] ) SELECT [aggregates_columns of each subset] FROM DataCTE Main LEFT JOIN DataCTE BananasSubset ON [...] AND Product = 'Bananas' AND Quality = 100 LEFT JOIN DataCTE DamagedBananasSubset ON [...] AND Product = 'Bananas' AND Quality < 20 LEFT JOIN DataCTE MangosSubset ON [...] GROUP BY [ I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess not being an expert at reading those. I would have assumed SQL Server to be smart enough to only perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE, but made it output to a temp table instead. The version referring the CTE version takes 40 seconds. The version referring the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table valued UDF, which I find a slightly less elegant solution. Did some of you had this kind of performance issues with CTE, and if so, how did you get them sorted? Thanks, Kharlos

    Read the article

  • Is this a legitimate implementation of a 'remember me' function for my web app?

    - by user246114
    Hi, I'm trying to add a "remember me" feature to my web app to let a user stay logged in between browser restarts. I think I've got the bulk of it. I'm using Google App Engine for the backend, which lets me use Java servlets. Here is some pseudo-code to demo:

        public class MyServlet {

            public void handleRequest() {
                if (getThreadLocalRequest().getSession().getAttribute("user") != null) {
                    // User already has a session running for them.
                } else {
                    // No session, but check if they chose 'remember me' during their
                    // initial login; if so we can have them 'auto log in' now.
                    Cookie[] cookies = getThreadLocalRequest().getCookies();
                    if (cookies.find("rememberMePlz").exists()) {
                        // The value of this cookie is the cookie id, which is a unique
                        // string that is in no way based upon the user's name/email/id,
                        // and is hard to randomly generate.
                        String cookieid = cookies.find("rememberMePlz").value();

                        // Get the user object associated with this cookie id from the
                        // data store; would probably be a two-step process like:
                        //
                        //   select * from cookies where cookieid = 'cookieid';
                        //   select * from users where userid = 'userid fetched from above select';
                        User user = DataStore.getUserByCookieId(cookieid);

                        if (user != null) {
                            // Start session for them.
                            getThreadLocalRequest().getSession().setAttribute("user", user);
                        } else {
                            // Either couldn't find a matching cookie with the supplied id,
                            // or maybe we expired the cookie on our side or blocked it.
                        }
                    }
                }
            }
        }

        // On first login, if the user wanted us to remember them, we'd generate an
        // instance of this object for them in the data store. We send the cookieid
        // value down to the client and they persist it on their side in the
        // "rememberMePlz" cookie.
        public class CookieLong {
            private String mCookieId;
            private String mUserId;
            private long mExpirationDate;
        }

    Alright, this all makes sense. The only frightening thing is: what happens if someone finds out the value of the cookie? A malicious individual could set that cookie in their browser and access my site, and essentially be logged in as the user associated with it! On the same note, I guess this is why the cookie ids must be difficult to randomly generate, because a malicious user doesn't have to steal someone's cookie - they could just randomly assign cookie values and start logging in as whichever user happens to be associated with that cookie, if any, right?

    Scary stuff. I feel like I should at least include the username in the client cookie, such that when it presents itself to the server, I won't auto-login unless the username+cookieid match in the DataStore. Any comments would be great; I'm new to this and trying to figure out a best practice. I'm not writing a site which contains any sensitive personal information, but I'd like to minimize any potential for abuse all the same. Thanks
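
    Two hardening habits that usually go with this scheme (a sketch, not a complete recipe - the class is invented, and Java 8's Base64 is used only for brevity): generate the cookie id from a cryptographic RNG so it can't realistically be guessed, and store only a hash of it in the datastore so a leaked table doesn't hand out working cookies:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.security.SecureRandom;
        import java.util.Base64;

        public class RememberMeTokens {

            private static final SecureRandom RANDOM = new SecureRandom();

            // 32 random bytes ~ 256 bits: far too large a space to guess or enumerate.
            public static String newToken() {
                byte[] raw = new byte[32];
                RANDOM.nextBytes(raw);
                return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
            }

            // Persist only this value (plus userId and expiry); the raw token goes in
            // the cookie. On auto-login, hash the presented cookie value and look the
            // hash up instead of the token itself.
            public static String hashForStorage(String token) {
                try {
                    MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
                    byte[] digest = sha256.digest(token.getBytes(StandardCharsets.UTF_8));
                    return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException(e);
                }
            }
        }

    With a token of that size, the "randomly assign cookie values until one matches" attack described above stops being practical, and pairing the token with the username (as suggested) narrows it further at no real cost.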

    Read the article

  • What is causing this SQL 2005 Primary Key Deadlock between two real-time bulk upserts?

    - by skimania
    Here's the scenario: I've got a table called MarketDataCurrent (MDC) that has live-updating stock prices.

    I've got one process called 'LiveFeed' which reads prices streaming from the wire, queues up inserts, and uses a 'bulk upload to temp table then insert/update to MDC table' approach (BulkUpsert). I've got another process which then reads this data, computes other data, and then saves the results back into the same table, using a similar BulkUpsert stored proc. Thirdly, there are a multitude of users running a C# GUI polling the MDC table and reading updates from it.

    Now, during the day when the data is changing rapidly, things run pretty smoothly, but after market hours we've recently started seeing an increasing number of deadlock exceptions coming out of the database; nowadays we see 10-20 a day. The important thing to note here is that these happen when the values are NOT changing. Here's all the relevant info:

    Table def:

        CREATE TABLE [dbo].[MarketDataCurrent](
            [MDID] [int] NOT NULL,
            [LastUpdate] [datetime] NOT NULL,
            [Value] [float] NOT NULL,
            [Source] [varchar](20) NULL,
            CONSTRAINT [PK_MarketDataCurrent] PRIMARY KEY CLUSTERED
            (
                [MDID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    - stackoverflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][1] [1]: http://farm5.static.flickr.com/4049/4690759452_6b94ff7b34.jpg

    I've got a Sql Profiler trace running, catching the deadlocks, and here's what all the graphs look like. stackoverflow won't let me post images until my reputation goes up to 10, so I'll add them as soon as you bump me up, hopefully as a result of this question. ![alt text][2] [2]: http://farm5.static.flickr.com/4035/4690125231_78d84c9e15_b.jpg

    Process 258 is calling the following 'BulkUpsert' stored proc, repeatedly, while 73 is calling the next one:

        ALTER proc [dbo].[MarketDataCurrent_BulkUpload]
            @updateTime datetime,
            @source varchar(10)
        as
        begin transaction

        update c with (rowlock)
        set LastUpdate = getdate(), Value = t.Value, Source = @source
        from MarketDataCurrent c
            INNER JOIN #MDTUP t ON c.MDID = t.mdid
        where c.lastUpdate < @updateTime
            and c.mdid not in (select mdid from MarketData
                               where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')
            and c.value <> t.value

        insert into MarketDataCurrent with (rowlock)
        select MDID, getdate(), Value, @source
        from #MDTUP
        where mdid not in (select mdid from MarketDataCurrent with (nolock))
            and mdid not in (select mdid from MarketData
                             where LiveFeedTicker is not null and PriceSource like 'LiveFeed.%')

        commit

    And the other one:

        ALTER PROCEDURE [dbo].[MarketDataCurrent_LiveFeedUpload]
        AS
        begin transaction

        -- Update existing mdid
        UPDATE c WITH (ROWLOCK)
        SET LastUpdate = t.LastUpdate, Value = t.Value, Source = t.Source
        FROM MarketDataCurrent c
            INNER JOIN #TEMPTABLE2 t ON c.MDID = t.mdid;

        -- Insert new MDID
        INSERT INTO MarketDataCurrent with (ROWLOCK)
        SELECT * FROM #TEMPTABLE2
        WHERE MDID NOT IN (SELECT MDID FROM MarketDataCurrent with (NOLOCK))

        -- Clean up the temp table
        DELETE #TEMPTABLE2

        commit

    To clarify, those temp tables are being created by the C# code on the same connection and are populated using the C# SqlBulkCopy class. To me it looks like it's deadlocking on the PK of the table, so I tried removing that PK and switching to a unique constraint instead, but that increased the number of deadlocks 10-fold. I'm totally lost as to what to do about this situation and am open to just about any suggestion. HELP!!

    Read the article

  • Algorithm to split an article without breaking the reading flow or HTML code

    - by Victor Stanciu
    Hello, I have a very large database of articles, of varying lengths. The articles have HTML elements in them. I have to insert some ads (simple <script> elements) in the body of each article when it is displayed (I know, I hate ads that interrupt my reading too). Now, the problem is that each ad must be inserted at about the same position in each article. The simplest solution is to split the article on a fixed number of characters (without breaking words) and insert the ad code. This, however, runs the risk of inserting the ad in the middle of an HTML tag. I could go the regex way, but I was thinking about the following solution, using JS:

    1. Establish a character count threshold. For example, "the ad should be inserted at about 200 characters".
    2. Set accepted deviations in each direction, say -20, +20 characters.
    3. Loop through each text node inside the article, and while doing so, keep count of the total number of characters so far.
    4. Once the count exceeds the threshold, make the following decision:
       4.1. If the count exceeds the threshold by a value lower than the positive accepted deviation (for example, 17 characters), insert the ad code just after the current text node.
       4.2. If the count is greater than the sum of the threshold and the deviation, roll back to the previous text node and make the same decision, only this time use the previous count and check whether it's lower than the difference between the threshold and the deviation; if not, insert the ad between the current node and the previous one.
       4.3. If 4.1 and 4.2 fail (which means the previous node reached a count that is too low and the current node one that is too high), insert the ad after whatever character count is needed inside the current element.

    I know it's convoluted, but it's the first thing out of my mind, and it has the advantage that, by trying to insert the ad between text nodes, perhaps it will not break the flow of the article as badly as it would if I just stuck it in (like the final 4.3 case).

    Here is some pseudo-code I put together; I don't trust my English-explaining skills:

        threshold = 200
        deviation = 20
        current_count = 0

        for each node in article_nodes {
            previous_count = current_count
            current_count = current_count + node.length

            if current_count < threshold {
                continue // next iteration
            }

            if current_count > threshold + deviation {
                if previous_count < threshold - deviation {
                    // insert ad in current node
                } else {
                    // insert ad between the current and previous nodes
                }
            } else {
                // insert ad after the current node
            }

            break;
        }

    Am I over-complicating stuff, or am I missing a simpler, more elegant solution?

    Read the article

  • Multi-threaded Pooled Allocators

    - by Darren Engwirda
    I'm having some issues using pooled memory allocators for std::list objects in a multi-threaded application. The part of the code I'm concerned with runs each thread function in isolation (i.e. there is no communication or synchronization between threads), and therefore I'd like to set up separate memory pools for each thread, where each pool is not thread-safe (and hence fast). I've tried using a shared thread-safe singleton memory pool and found the performance to be poor, as expected.

    This is a heavily simplified version of the type of thing I'm trying to do. A lot has been included in a pseudo-code kind of way, sorry if it's confusing.

        /* The thread functor - one instance of MAKE_QUADTREE created for each thread */
        class make_quadtree {
        private:
            /* A non-thread-safe memory pool for int linked list items, let's say that
             * it's something along the lines of BOOST::OBJECT_POOL */
            pooled_allocator<int> item_pool;

            /* The problem! - a local class that would be constructed within each
             * std::list as the allocator but really just delegates to ITEM_POOL */
            class local_alloc {
            public:
                //!! I understand that I can't access ITEM_POOL from within a nested
                //!! class like this, that's really my question - can I get something
                //!! along these lines to work??
                pointer allocate(size_t n) {
                    return ( item_pool.allocate(n) );
                }
            };

        public:
            make_quadtree()
                : item_pool()   // only construct 1 instance of ITEM_POOL per
                                // MAKE_QUADTREE object
            {
                /* The kind of data structures - vectors of linked lists.
                 * The idea is that all of the linked lists should share a local
                 * pooled allocator */
                std::vector<std::list<int, local_alloc>> lists;

                /* The actual operations - too complicated to show, but in general:
                 *
                 * - The vector LISTS is grown as a quadtree is built; its size is the
                 *   number of quadtree "boxes"
                 *
                 * - Each element of LISTS (each linked list) represents the IDs of
                 *   items contained within each quadtree box (say they're xy points);
                 *   as the quadtree is grown a lot of ID pop/push-ing between lists
                 *   occurs, hence the memory pool is important for performance */
            }
        };

    So really my problem is that I'd like to have one memory pool instance per thread functor instance, but within each thread functor share the pool between multiple std::list objects.

    Read the article
