Search Results

Search found 2264 results on 91 pages for 'odd rationale'.

Page 82/91

  • Why does floating-point precision in C# differ when grouped by parentheses versus separated into statements?

    - by Andreas Larsen
    I am aware of how floating point precision works in the regular cases, but I stumbled on an odd situation in my C# code. Why aren't result1 and result2 the exact same floating point value here? const float A; // Arbitrary value const float B; // Arbitrary value float result1 = (A*B)*dt; float result2 = (A*B); result2 *= dt; From this page I figured float arithmetic was left-associative and that this means values are evaluated and calculated in a left-to-right manner. The full source code involves XNA's Quaternions. I don't think it's relevant what my constants are or what VectorHelper.AddPitchRollYaw() does. The test passes just fine if I calculate the delta pitch/roll/yaw angles in the same manner, but with the code as it stands below it does not pass: X Expected: 0.275153548f But was: 0.275153786f [TestFixture] internal class QuaternionPrecisionTest { [Test] public void Test() { JoystickInput input; input.Pitch = 0.312312432f; input.Roll = 0.512312432f; input.Yaw = 0.912312432f; const float dt = 0.017001f; float pitchRate = input.Pitch * PhysicsConstants.MaxPitchRate; float rollRate = input.Roll * PhysicsConstants.MaxRollRate; float yawRate = input.Yaw * PhysicsConstants.MaxYawRate; Quaternion orient1 = Quaternion.Identity; Quaternion orient2 = Quaternion.Identity; for (int i = 0; i < 10000; i++) { float deltaPitch = (input.Pitch * PhysicsConstants.MaxPitchRate) * dt; float deltaRoll = (input.Roll * PhysicsConstants.MaxRollRate) * dt; float deltaYaw = (input.Yaw * PhysicsConstants.MaxYawRate) * dt; // Add deltas of pitch, roll and yaw to the rotation matrix orient1 = VectorHelper.AddPitchRollYaw( orient1, deltaPitch, deltaRoll, deltaYaw); deltaPitch = pitchRate * dt; deltaRoll = rollRate * dt; deltaYaw = yawRate * dt; orient2 = VectorHelper.AddPitchRollYaw( orient2, deltaPitch, deltaRoll, deltaYaw); } Assert.AreEqual(orient1.X, orient2.X, "X"); Assert.AreEqual(orient1.Y, orient2.Y, "Y"); Assert.AreEqual(orient1.Z, orient2.Z, "Z"); Assert.AreEqual(orient1.W, orient2.W, "W"); } } Granted, the error is small and only presents itself after a large number of iterations, but it has caused me some great headaches.
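
    For what it's worth, the runtime is permitted to evaluate float intermediates at higher precision, so the single-expression form and the two-statement form can legitimately round differently in the last bits. A common way to make such a test robust is to compare with a tolerance instead of exact equality; a minimal sketch (the tolerance value is an arbitrary illustration, not taken from the original test):

    ```csharp
    using System;

    static class FloatAssert
    {
        // Compares two floats with a tolerance instead of exact equality.
        // The 1e-4f default is an illustrative assumption, not a value from the question.
        public static void NearlyEqual(float expected, float actual, string label, float tolerance = 1e-4f)
        {
            if (Math.Abs(expected - actual) > tolerance)
                throw new InvalidOperationException(
                    string.Format("{0}: expected {1} but was {2}", label, expected, actual));
        }
    }

    // Usage in place of Assert.AreEqual(orient1.X, orient2.X, "X"):
    // FloatAssert.NearlyEqual(orient1.X, orient2.X, "X");
    ```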

  • How is IObservable<double>.Average supposed to work?

    - by Dan Tao
    Update Looks like Jon Skeet was right (big surprise!) and the issue was with my assumption about the Average extension providing a continuous average (it doesn't). For the behavior I'm after, I wrote a simple ContinuousAverage extension method, the implementation of which I am including here for the benefit of others who may want something similar: public static class ObservableExtensions { private class ContinuousAverager { private double _mean; private long _count; public ContinuousAverager() { _mean = 0.0; _count = 0L; } // undecided whether this method needs to be made thread-safe or not // seems that ought to be the responsibility of the IObservable (?) public double Add(double value) { double delta = value - _mean; _mean += (delta / (double)(++_count)); return _mean; } } public static IObservable<double> ContinousAverage(this IObservable<double> source) { var averager = new ContinuousAverager(); return source.Select(x => averager.Add(x)); } } I'm thinking of going ahead and doing something like the above for the other obvious candidates as well -- so, ContinuousCount, ContinuousSum, ContinuousMin, ContinuousMax ... perhaps ContinuousVariance and ContinuousStandardDeviation as well? Any thoughts on that? Original Question I use Rx Extensions a little bit here and there, and feel I've got the basic ideas down. Now here's something odd: I was under the impression that if I wrote this: var ticks = Observable.FromEvent<QuoteEventArgs>(MarketDataProvider, "MarketTick"); var bids = ticks .Where(e => e.EventArgs.Quote.HasBid) .Select(e => e.EventArgs.Quote.Bid); var bidsSubscription = bids.Subscribe( b => Console.WriteLine("Bid: {0}", b) ); var avgOfBids = bids.Average(); var avgOfBidsSubscription = avgOfBids.Subscribe( b => Console.WriteLine("Avg Bid: {0}", b) ); I would get two IObservable<double> objects (bids and avgOfBids); one would basically be a stream of all the market bids from my MarketDataProvider, the other would be a stream of the average of these bids. So something like this: Bid Avg Bid 1 1 2 1.5 1 1.33 2 1.5 It seems that my avgOfBids object isn't doing anything. What am I missing? I think I've probably misunderstood what Average is actually supposed to do. (This also seems to be the case for all of the aggregate-like extension methods on IObservable<T> -- e.g., Max, Count, etc.)
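
    For comparison, the same running aggregate can be written with Rx's Scan operator instead of a hand-rolled averager class; a sketch (the namespace of the operators varies between Rx releases):

    ```csharp
    using System;
    using System.Reactive.Linq;

    public static class RunningAverageExtensions
    {
        // Emits the mean of all values seen so far, once per incoming value.
        public static IObservable<double> RunningAverage(this IObservable<double> source)
        {
            return source
                .Scan(new { Sum = 0.0, Count = 0L },
                      (acc, x) => new { Sum = acc.Sum + x, Count = acc.Count + 1 })
                .Select(acc => acc.Sum / acc.Count);
        }
    }
    ```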

  • How to produce precisely-timed tone and silence?

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I write it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep() then play another, etc. At its fastest, the tones and spaces can be as short as 40ms. It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec. of sine wave into a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer then play. I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone. I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below) but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd since I play to the end then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80ms after starting Play of a 40 ms tone that it wouldn't have buffers on the queue. DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in VS2008 IDE. I don't create new objects during playing, I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers/whatever piling up till it's choked. I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution.
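
    Whichever playback API wins out, the timing problem mostly reduces to generating a buffer whose length is an exact sample count, rather than timing Play/Sleep/Stop calls. A library-independent sketch of that idea (the sample rate and amplitude are illustrative assumptions):

    ```csharp
    using System;

    static class MorseTone
    {
        // Builds exactly 'milliseconds' worth of a sine tone as 16-bit PCM samples.
        // Timing precision comes from the sample count, not from wall-clock sleeps.
        public static short[] BuildTone(double frequencyHz, int milliseconds, int sampleRate = 44100)
        {
            int sampleCount = (int)((long)sampleRate * milliseconds / 1000);
            var samples = new short[sampleCount];
            for (int i = 0; i < sampleCount; i++)
            {
                double t = (double)i / sampleRate;
                samples[i] = (short)(Math.Sin(2.0 * Math.PI * frequencyHz * t) * short.MaxValue * 0.8);
            }
            return samples;
        }

        // A gap of a given duration is just a zeroed buffer of the same exact length.
        public static short[] BuildSilence(int milliseconds, int sampleRate = 44100)
        {
            return new short[(int)((long)sampleRate * milliseconds / 1000)];
        }
    }
    ```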

  • How does session_start lock in PHP?

    - by Morgan Cheng
    Originally, I just wanted to verify that session_start locks on the session, so I created a PHP file as below. Basically, if the page view count is even, the page sleeps for 10 seconds; if it is odd, it doesn't. session_start is used to obtain the page view count from $_SESSION. I tried to access the page in two tabs of one browser. It is not surprising that the first tab takes 10 seconds, since I explicitly let it sleep. The second tab would not sleep, but it should be blocked by session_start. That works as expected. To my surprise, though, the output of the second page shows that session_start takes almost no time; actually, the whole page seems to take no time to load. But the page does take 10 seconds to show in the browser. obtained lock Cost time: 0.00016689300537109 Start 1269739162.1997 End 1269739162.1998 allover time elpased : 0.00032305717468262 The page views: 101 Does PHP pull session_start out of the PHP page and execute it before the other PHP statements? This is the code. <?php function float_time() { list($usec, $sec) = explode(' ', microtime()); return (float)$sec + (float)$usec; } $allover_start_time = float_time(); $start_time = float_time(); session_start(); echo "obtained lock<br/>"; $end_time = float_time(); $elapsed_time = $end_time - $start_time; echo "Cost time: $elapsed_time <br>"; echo "Start $start_time<br/>"; echo "End $end_time<br/>"; ob_flush(); flush(); if (isset($_SESSION['views'])) { $_SESSION['views'] += 1; } else { $_SESSION['views'] = 0; } if ($_SESSION['views'] % 2 == 0) { echo "sleep 10 seconds<br/>"; sleep(10); } $allover_end_time = float_time(); echo "allover time elpased : " . ($allover_end_time - $allover_start_time) . "<br/>"; echo "The page views: " . $_SESSION['views']; ?>

  • UINavigationBar unresponsive after canceling a UITableView search in nav controller in tab bar in a popover

    - by Mark
    Ok, this is an odd one and I can reproduce it with a new project easily. Here is the setup: I have a UISplitViewController. In the left side I have a UITabBarController. In this tab bar controller I have two UINavigationControllers. In the navigation controllers I have UITableViewControllers. These table views have search bars on them. Ok, what happens with this setup is that if I'm in portrait mode and bring up this view in the popover and I start a search in one of the table views and cancel it, the navigation bar becomes unresponsive. That is, the "back" button as well as the right side button cannot be clicked. If I do the exact same thing in landscape mode so we are not in a popover, this doesn't happen. The navigation bar stays responsive. So, the problem only seems to happen inside a popover. I've also noticed that if I do the search but click on an item in the search results which ends up loading something into the "detail view" of the split view and dismissing the popover, and then come back to the popover and then click the Cancel button for the search, the navigation bar is responsive. My application is a universal app and uses the same tab bar controller in the iPhone interface and it works there without this issue. As I mentioned above, I can easily reproduce this with a new project. Here are the steps if you want to try it out yourself: start new project - split view create new UITableViewController class (i named TableViewController) uncomment out the viewDidLoad method as well as the rightBarButtonItem line in viewDidLoad (so we will have an Edit button in the navigation bar) enter any values you want to return from numberOfSectioinsInTableView and numberOfRowsInSection methods open MainWindow.xib and do the following: please note that you will need to be viewing the xib in the middle "view mode" so you can expand the contents of the items drag a Tab Bar Controller into the xib to replace the Navigation Controller item drag a Navigation Controller into the xib as another item under the Tab Bar Controller delete the other two view controllers that are under the Tab Bar Controller (so, now our tab bar has just the one navigation controller on it) inside the navigation controller, drag in a Table View Controller and use it to replace the View Controller (Root View Controller) change the class of the new Table View Controller to the class created above (TableViewController for me) double-click on the Table View under the new Table View Controller to open it up (will be displayed in the tab bar inside the split view controller) drag a "Search Bar and Search Display" onto the table view save the xib run the project in simulator while in portrait mode, click on the Root List button to bring up popover notice the Edit button is clickable click in the Search box - we go into search mode click the Cancel button to exit search mode notice the Edit button no longer works So, can anyone help me figure out why this is happening? Thanks, Mark

  • How does MySQL's ORDER BY RAND() work?

    - by Eugene
    Hi, I've been doing some research and testing on how to do fast random selection in MySQL. In the process I've run into some unexpected results, and now I am not fully sure I know how ORDER BY RAND() really works. I always thought that when you do ORDER BY RAND() on a table, MySQL adds a new column filled with random values, sorts the data by that column, and then you take, e.g., the top value, which got there randomly. I've done lots of googling and testing and finally found that the query Jay offers in his blog is indeed the fastest solution: SELECT * FROM Table T JOIN (SELECT CEIL(MAX(ID)*RAND()) AS ID FROM Table) AS x ON T.ID >= x.ID LIMIT 1; While a plain ORDER BY RAND() takes 30-40 seconds on my test table, his query does the work in 0.1 seconds. He explains how this works in the blog, so I'll skip that and move on to the odd thing. My table is an ordinary table with a PRIMARY KEY id and other non-indexed columns like username, age, etc. Here's the thing I am struggling to explain: SELECT * FROM table ORDER BY RAND() LIMIT 1; /*30-40 seconds*/ SELECT id FROM table ORDER BY RAND() LIMIT 1; /*0.25 seconds*/ SELECT id, username FROM table ORDER BY RAND() LIMIT 1; /*90 seconds*/ I was expecting to see approximately the same time for all three queries, since I am always sorting on a single column, but for some reason that didn't happen. Please let me know if you have any ideas about this. I have a project where I need to do fast ORDER BY RAND(), and personally I would prefer to use SELECT id FROM table ORDER BY RAND() LIMIT 1; SELECT * FROM table WHERE id=ID_FROM_PREVIOUS_QUERY LIMIT 1; which, yes, is slower than Jay's method, but it is smaller and easier to understand. My queries are rather big ones with several JOINs and a WHERE clause, and while Jay's method still works, the query grows really big and complex because I need to repeat all the JOINs and the WHERE clause in the joined subquery (called x in his query). Thanks for your time!

  • Unformatted input to a std::string instead of a C-string from a binary file

    - by posop
    OK, I have this program working using C-strings. I am wondering if it is possible to read in blocks of unformatted text to a std::string? I toyed around with in >> but this reads it in line by line. I've been breaking my code and banging my head against the wall trying to use std::string, so I thought it was time to enlist the experts. Here's a working program; you need to supply a file "a.txt" with some content to make it run. I tried to fool around with: in.read (const_cast<char *>(memblock.c_str()), read_size); but it was acting odd. I had to do std::cout << memblock.c_str() to get it to print, and memblock.clear() did not clear out the string. Anyway, if you can think of a way to use the STL I would greatly appreciate it. Here's my program using C-strings: // What this program does now: copies a file to a new location byte by byte // What this program is going to do: get small blocks of a file and encrypt them #include <fstream> #include <iostream> #include <string> int main (int argc, char * argv[]) { int read_size = 16; int infile_size; std::ifstream in; std::ofstream out; char * memblock; int completed = 0; memblock = new char [read_size]; in.open ("a.txt", std::ios::in | std::ios::binary | std::ios::ate); if (in.is_open()) infile_size = in.tellg(); out.open("b.txt", std::ios::out | std::ios::trunc | std::ios::binary); in.seekg (0, std::ios::beg);// get to beginning of file while(!in.eof()) { completed = completed + read_size; if(completed < infile_size) { in.read (memblock, read_size); out.write (memblock, read_size); } // end if else // last run { delete[] memblock; memblock = new char [infile_size % read_size]; in.read (memblock, infile_size % read_size + 1); out.write (memblock, infile_size % read_size ); } // end else } // end while } // main If you see anything that would make this code better, please feel free to let me know.

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild is behaving with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows: <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project --> <ItemGroup> <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files --> <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" /> <!-- Folder 2: 2 files --> <DeleteAfterBuild Include="$(OutputPath)banners\*.*" /> <!-- Folder 3: 1 file --> </ItemGroup> <Target Name="AfterBuild"> <Message Text="------ AfterBuild process starting ------" Importance="high" /> <Delete Files="@(DeleteAfterBuild)"> <Output TaskParameter="DeletedFiles" PropertyName="deleted" /> </Delete> <Message Text="DELETED FILES: $(deleted)" Importance="high" /> <Message Text="------ AfterBuild process complete ------" Importance="high" /> </Target> The problem I have is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files but other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files or not. When I've edited the configuration to include only Folder 1 (for example), it removes all the files correctly. Then adding Folder 2 and 3, it starts removing all the files as I want. Then, seeming at random times, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where it should be) but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?

  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate, I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try and run the following test, I get an exception. I've triede a couple of different ways to perform a similar query, but almost identical exception for all (it differs in which LINQ method being executed). The first test: [Test] public void QueryLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .FirstOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) The second test: [Test] public void QueryLatestBatch2() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .OrderBy(x => x.Executed) .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source) However, this one is passing (using QueryOver<): [Test] public void QueryOverLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.QueryOver<Batch>() .OrderBy(x => x.Executed).Asc .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); Assert.That(batch.Executed, Is.LessThan(DateTime.Now)); } } Using the QueryOver< API is not bad at all, but I'm just kind of baffled that the Query< API isn't working, which is kind of sad, since the First() operation is very concise, and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange if these methods are failing such a simple test. EDIT I'm using Oracle 11g, my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver< API, but not through LINQ.

  • WordPress Write Cache Problem with Multiple Sessions

    - by Volomike
    I'm working on a content dripper custom plugin in WordPress that my client asked me to build. He says he wants it to catch a page view event, and if it's the right time of day (24 hours since last post), to pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code. So, raise some kind of flag saying, "I'm posting that post, go away other process," and then it makes that post and releases the flag again. However, the strangest thing is occurring when placed under load with multiple sessions hitting the site with page views. It's firing instead of one post -- it's randomly doing like 1, 2, or 3 extra posts, with each one thinking that it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing that the problem is some kind of write caching where the other sessions don't see the raised flag just yet until a couple microseconds pass. The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal. But what it's doing is letting those other sessions in. To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule. I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like this aren't available by default on most phones: full text search: searches all address book contents, messages, anything else being a plus better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something etc.) bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone) no multitasking capabilities worth mentioning and in general no e.g. weekly software updates, making the phone much more usable (even if it had to be done over USB, rather than over the network). I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year old phone has a CPU an order of magnitude faster than my first PC, about as much storage space and it's ridiculous how bad (slow, unwieldy) the software is and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up? Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme, if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation: messages, calling, calendar, everything out of reach, although I can receive a call. No "battery life" issue there: it's charging while connected. BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching it last time (podcasts are a good use case). Simplistic rewind/fast forward functionality only aggravates the problem.

  • Is there anything else I can do to optimize this MySQL query?

    - by Legend
    I have two tables, Table A with 700,000 entries and Table B with 600,000 entries. The structure is as follows: Table A: +-----------+---------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------+---------------------+------+-----+---------+----------------+ | id | bigint(20) unsigned | NO | PRI | NULL | auto_increment | | number | bigint(20) unsigned | YES | | NULL | | +-----------+---------------------+------+-----+---------+----------------+ Table B: +-------------+---------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------------------+------+-----+---------+----------------+ | id | bigint(20) unsigned | NO | PRI | NULL | auto_increment | | number_s | bigint(20) unsigned | YES | MUL | NULL | | | number_e | bigint(20) unsigned | YES | MUL | NULL | | | source | varchar(50) | YES | | NULL | | +-------------+---------------------+------+-----+---------+----------------+ I am trying to find if any of the values in Table A are present in Table B using the following code: $sql = "SELECT number from TableA"; $result = mysql_query($sql) or die(mysql_error()); while($row = mysql_fetch_assoc($result)) { $number = $row['number']; $sql = "SELECT source, count(source) FROM TableB WHERE number_s < $number AND number_e > $number GROUP BY source"; $re = mysql_query($sql) or die(mysql_error); while($ro = mysql_fetch_array($re)) { echo $number."\t".$ro[0]."\t".$ro[1]."\n"; } } I was hoping that the query would go fast but then for some reason, it isn't terrible fast. My explain on the select (with a particular value of "number") gives me the following: mysql> explain SELECT source, count(source) FROM TableB WHERE number_s < 1812194440 AND number_e > 1812194440 GROUP BY source; +----+-------------+------------+------+-------------------------+------+---------+------+--------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+------------+------+-------------------------+------+---------+------+--------+----------------------------------------------+ | 1 | SIMPLE | TableB | ALL | number_s,number_e | NULL | NULL | NULL | 696325 | Using where; Using temporary; Using filesort | +----+-------------+------------+------+-------------------------+------+---------+------+--------+----------------------------------------------+ 1 row in set (0.00 sec) Is there any optimization that I can squeeze out of this? I tried writing a stored procedure for the same task but it doesn't even seem to work in the first place... It doesn't give any syntax errors... I tried running it for a day and it was still running which felt odd. CREATE PROCEDURE Filter() Begin DECLARE number BIGINT UNSIGNED; DECLARE x INT; DECLARE done INT DEFAULT 0; DECLARE cur1 CURSOR FOR SELECT number FROM TableA; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1; CREATE TEMPORARY TABLE IF NOT EXISTS Flags(number bigint unsigned, count int(11)); OPEN cur1; hist_loop: LOOP FETCH cur1 INTO number; SELECT count(*) from TableB WHERE number_s < number AND number_e > number INTO x; IF done = 1 THEN LEAVE hist_loop; END IF; IF x IS NOT NULL AND x>0 THEN INSERT INTO Flags(number, count) VALUES(number, x); END IF; END LOOP hist_loop; CLOSE cur1; END

  • Menu Control in Master Page fails to use CSS styles

    - by Shaun
    I'm working on a web application that uses ASP.NET 3.5 and C#. Structurally, I have a master page with a menu control on it. The control serves as my navigation, and it gets its items from a SiteMapDataSource control and a corresponding Web.sitemap file. The problem is that some styles do not render properly when you specify the CssClass property. More specifically, the selected and hover styles don't respond to css styles. Consider the code below: <%@ Master Language="C#" AutoEventWireup="true" CodeFile="Site.master.cs" Inherits="Site" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.or/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>A webpage</title> </head> <body> <form id="form1" runat="server"> <div id="page"> <asp:Menu ID="navMenu" Orientation="Horizontal" StaticMenuStyle-CssClass="staticMenu" StaticMenuItemStyle-CssClass="staticMenuItem" StaticSelectedStyle-CssClass="staticSelectedItem" StaticHoverStyle-CssClass="staticHoverItem" runat="server"> </asp:Menu> <asp:SiteMapDataSource ID="srcSiteMap" runat="server" ShowStartingNode="false" /> <br /> <asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server"> </asp:ContentPlaceHolder> </div> </form> </body> </html> Suppose I had a corresponding .css file with the following: .staticMenuItem { background-color:Red; } .staticSelectedItem { background-color:Green; } .staticHoverItem { background-color:Blue; } What will happen is that my item backgrounds will properly be red, but my selected item will not be green and the item I'm hovering my mouse over will not be blue. This seems true regardless of whether or not I include the style in the head of the master page or in an external file in default theme as specified in the web.config file. If I specify the styles in the asp.net xml like so: <asp:Menu ID="navMenu" Orientation="Horizontal" runat="server"> <StaticSelectedStyle BackColor="Green" Font-Underline="True" Font-Bold="True" /> <StaticHoverStyle BackColor="Gray" /> </asp:Menu> It appears to work properly in Firefox, but the style is never embedded in the html in Internet Explorer. Odd. Does anybody have any insight into what is causing this problem and how to neatly work around it? I'm aware I might be able to programmically determine the current page and select the corresponding menu item manually so it receives the proper style class, but before I resort to hacking C# and Javascript together to fix this functionality, I'm open to ideas.

  • About the fork() system call and global variables

    - by lurks
    I have this program in C++ that forks two new processes: #include <pthread.h> #include <iostream> #include <unistd.h> #include <sys/types.h> #include <sys/wait.h> #include <cstdlib> using namespace std; int shared; void func(){ extern int shared; for (int i=0; i<10;i++) shared++; cout<<"Process "<<getpid()<<", shared " <<shared<<", &shared " <<&shared<<endl; } int main(){ extern int shared; pid_t p1,p2; int status; shared=0; if ((p1=fork())==0) {func();exit(0);}; if ((p2=fork())==0) {func();exit(0);}; for(int i=0;i<10;i++) shared++; waitpid(p1,&status,0); waitpid(p2,&status,0);; cout<<"shared variable is: "<<shared<<endl; cout<<"Process "<<getpid()<<", shared " <<shared<<", &shared " <<&shared<<endl; } The two forked processes make an increment on the shared variables and the parent process does the same. As the variable belongs to the data segment of each process, the final value is 10 because the increment is independent. However, the memory address of the shared variables is the same, you can try compiling and watching the output of the program. How can that be explained ? I cannot understand that, I thought I knew how the fork() works, but this seems very odd.. I need an explanation on why the address is the same, although they are separate variables.

  • JAX-WS wsgen and collections of collections: wsgen broken?

    - by ayang
    I've been playing around with "bottom-up" JAX-WS and have come across something odd when running wsgen. If I have a service class that does something like: @WebService public class Foo { public ArrayList<Bar> getBarList(String baz) { ... } } then running wsgen gets me a FooService_schema1.xsd that has something like this: <xs:complexType name="getBarListResponse"> <xs:sequence> <xs:element name="return" type="tns:bar" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> which seems reasonable. However, if I want a collection of collections like: public BarCollection getBarCollection(String baz) { ... } // BarCollection is just a container for an ArrayList<Bar> then the generated schema ends up with stuff like: <xs:complexType name="barCollection"> <xs:sequence/> </xs:complexType> <xs:complexType name="getBookCollectionsResponse"> <xs:sequence> <xs:element name="return" type="tns:barCollection" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> An empty sequence is not what I had in mind at all. My original approach was to go with: public ArrayList<ArrayList<Bar>> getBarLists(String baz) { ... } but that ends up with a big chain of complexTypes that just wind up with an empty sequence at the end as well: <xs:complexType name="getBarListsResponse"> <xs:sequence> <xs:element name="return" type="tns:arrayList" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <xs:complexType name="arrayList"> <xs:complexContent> <xs:extension base="tns:abstractList"> <xs:sequence/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="abstractList" abstract="true"> <xs:complexContent> <xs:extension base="tns:abstractCollection"> <xs:sequence/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="abstractCollection" abstract="true"> <xs:sequence/> </xs:complexType> Am I missing something or is this a known hole in wsgen? JAXB? Andy

  • Thread.CurrentThread.CurrentUICulture not working consistently

    - by xTRUMANx
    I've been working on a pet project on the weekends to learn more about C# and have encountered an odd problem when working with localization. To be more specific, the problem I have is with System.Threading.Thread.CurrentThread.CurrentUICulture. I've set up my app so that the user can quickly change the language of the app by clicking a menu item. The menu item in turn, saves the two-letter code for the language (e.g. "en", "fr", etc.) in a user setting called 'Language' and then restarts the application. Properties.Settings.Default.Language = "en"; Properties.Settings.Default.Save(); Application.Restart(); When the application is started up, the first line of code in the Form's constructor (even before InitializeComponent()) fetches the Language string from the settings and sets the CurrentUICulture like so: public Form1() { Thread.CurrentThread.CurrentUICulture = new CultureInfo(Properties.Settings.Default.Language); InitializeComponent(); } The thing is, this doesn't work consistently. Sometimes, all works well and the application loads the correct language based on the string saved in the settings file. Other times, it doesn't, and the language remains the same after the application is restarted. At first I thought that I didn't save the language before restarting the application but that is definitely not the case. When the correct language fails to load, if I were to close the application and run it again, the correct language would come up correctly. So this implies that the Language string has been saved but the CurrentUICulture assignment in my form constructor is having no effect sometimes. Any help? Is there something I'm missing of how threading works in C#? This could be machine-specific, so if it makes any difference I'm using Pentium Dual-Core CPU. UPDATE Vlad asked me to check what the CurrentThread's CurrentUICulture is. So I added a MessageBox on my constructor to tell me what the CurrentUICulture two-letter code is as well as the value of my Language user string. MessageBox.Show(string.Format("Current Language: {0}\nCurrent UI Culture: {1}", Properties.Settings.Default.Language, Thread.CurrentThread.CurrentUICulture.TwoLetterISOLanguageName)); When the wrong language is loaded, both the Language string and CurrentUICulture have the wrong language. So I guess the CurrentUICulture has been cleared and my problem is actually with the Language Setting. So I guess the problem is that my application sometimes loads the previously saved language string rather than the last saved language string. If the app is restarted, it will then load the actual saved language string.
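
    One arrangement worth trying (a sketch that reuses the Properties.Settings.Default.Language setting and Form1 from the question) is to apply the saved culture in Main, before any form is constructed, and to set CurrentCulture alongside CurrentUICulture so nothing created later picks up stale values:

    ```csharp
    using System;
    using System.Globalization;
    using System.Threading;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main()
        {
            // Apply the persisted language before any UI is created.
            var culture = new CultureInfo(Properties.Settings.Default.Language);
            Thread.CurrentThread.CurrentCulture = culture;
            Thread.CurrentThread.CurrentUICulture = culture;

            Application.EnableVisualStyles();
            Application.Run(new Form1());
        }
    }
    ```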

  • XML validation in Java - why does this fail?

    - by jd
    hi, first time dealing with xml, so please be patient. the code below is probably evil in a million ways (I'd be very happy to hear about all of them), but the main problem is of course that it doesn't work :-) public class Test { private static final String JSDL_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"; private static final String JSDL_POSIX_APPLICATION_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"; public static void main(String[] args) { System.out.println(Test.createJSDLDescription("/bin/echo", "hello world")); } private static String createJSDLDescription(String execName, String args) { Document jsdlJobDefinitionDocument = getJSDLJobDefinitionDocument(); String xmlString = null; // create the elements Element jobDescription = jsdlJobDefinitionDocument.createElement("JobDescription"); Element application = jsdlJobDefinitionDocument.createElement("Application"); Element posixApplication = jsdlJobDefinitionDocument.createElementNS(JSDL_POSIX_APPLICATION_SCHEMA_URL, "POSIXApplication"); Element executable = jsdlJobDefinitionDocument.createElement("Executable"); executable.setTextContent(execName); Element argument = jsdlJobDefinitionDocument.createElement("Argument"); argument.setTextContent(args); //join them into a tree posixApplication.appendChild(executable); posixApplication.appendChild(argument); application.appendChild(posixApplication); jobDescription.appendChild(application); jsdlJobDefinitionDocument.getDocumentElement().appendChild(jobDescription); DOMSource source = new DOMSource(jsdlJobDefinitionDocument); validateXML(source); try { Transformer transformer = TransformerFactory.newInstance().newTransformer(); transformer.setOutputProperty(OutputKeys.INDENT, "yes"); StreamResult result = new StreamResult(new StringWriter()); transformer.transform(source, result); xmlString = result.getWriter().toString(); } catch (Exception e) { e.printStackTrace(); } return xmlString; } private static Document getJSDLJobDefinitionDocument() { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = null; try { builder = factory.newDocumentBuilder(); } catch (Exception e) { e.printStackTrace(); } DOMImplementation domImpl = builder.getDOMImplementation(); Document theDocument = domImpl.createDocument(JSDL_SCHEMA_URL, "JobDefinition", null); return theDocument; } private static void validateXML(DOMSource source) { try { URL schemaFile = new URL(JSDL_SCHEMA_URL); Sche maFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI); Schema schema = schemaFactory.newSchema(schemaFile); Validator validator = schema.newValidator(); DOMResult result = new DOMResult(); validator.validate(source, result); System.out.println("is valid"); } catch (Exception e) { e.printStackTrace(); } } } it spits out a somewhat odd message: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'JobDescription'. One of '{"http://schemas.ggf.org/jsdl/2005/11/jsdl":JobDescription}' is expected. Where am I going wrong here? Thanks a lot

  • Postgresql: Implicit lock acquisition from foreign-key constraint evaluation

    - by fennec
    So, I'm being confused about foreign key constraint handling in Postgresql. (version 8.4.4, for what it's worth). We've got a couple of tables, mildly anonymized below: device: (id, blah, blah, blah, blah, blah x 50)… primary key on id whooooole bunch of other junk device_foo: (id, device_id, left, right) Foreign key (device_id) references device(id) on delete cascade; primary key on id btree index on 'left' and 'right' So I set out with two database windows to run some queries. db1> begin; lock table device in exclusive mode; db2> begin; update device_foo set left = left + 1; The db2 connection blocks. It seems odd to me that an update of the 'left' column on device_stuff should be affected by activity on the device table. But it is. In fact, if I go back to db1: db1> select * from device_stuff for update; *** deadlock occurs *** The pgsql log has the following: blah blah blah deadlock blah. CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."device" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR SHARE OF X: update device_foo set left = left + 1; I suppose I've got two issues: the first is that I don't understand the precise mechanism by which this sort of locking occurs. I have got a couple of useful queries to query pg_locks to see what sort of locks a statement invokes, but I haven't been able to observe this particular sort of locking when I run the update device_foo command in isolation. (Perhaps I'm doing something wrong, though.) I also can't find any documentation on the lock acquisition behavior of foreign-key constraint checks. All I have is a log message. Am I to infer from this that any change to a row will acquire an update lock on all the tables which it's foreign-keyed against? The second issue is that I'd like to find some way to make it not happen like that. I'm ending up with occasional deadlocks in the actual application. I'd like to be able to run big update statements that impact all rows on device_foo without acquiring a big lock on the device table. (There's a lot of access going on in the device table, and it's kind of an expensive lock to get.)

  • Crash in OS X Core Data Utility Tutorial

    - by vinogradov
    I'm trying to follow Apple's Core Data utility Tutorial. It was all going nicely, until... The tutorial uses a custom sub-class of NSManagedObject, called 'Run'. Run.h looks like this: #import <Foundation/Foundation.h> #import <CoreData/CoreData.h> @interface Run : NSManagedObject { NSInteger processID; } @property (retain) NSDate *date; @property (retain) NSDate *primitiveDate; @property NSInteger processID; @end Now, in Run.m we have an accessor method for the processID variable: - (void)setProcessID:(int)newProcessID { [self willChangeValueForKey:@"processID"]; processID = newProcessID; [self didChangeValueForKey:@"processID"]; } In main.m, we use functions to set up a managed object model and context, instantiate an entity called run, and add it to the context. We then get the current NSprocessInfo, in preparation for setting the processID of the run object. NSManagedObjectContext *moc = managedObjectContext(); NSEntityDescription *runEntity = [[mom entitiesByName] objectForKey:@"Run"]; Run *run = [[Run alloc] initWithEntity:runEntity insertIntoManagedObjectContext:moc]; NSProcessInfo *processInfo = [NSProcessInfo processInfo]; Next, we try to call the accessor method defined in Run.m to set the value of processID: [run setProcessID:[processInfo processIdentifier]]; And that's where it's crashing. The object run seems to exist (I can see it in the debugger), so I don't think I'm messaging nil; on the other hand, it doesn't look like the setProcessID: message is actually being received. I'm obviously still learning this stuff (that's what tutorials are for, right?), and I'm probably doing something really stupid. However, any help or suggestions would be gratefully received! ===MORE INFORMATION=== Following up on Jeremy's suggestions: The processID attribute in the model is set up like this: NSAttributeDescription *idAttribute = [[NSAttributeDescription alloc]init]; [idAttribute setName:@"processID"]; [idAttribute setAttributeType:NSInteger32AttributeType]; [idAttribute setOptional:NO]; [idAttribute setDefaultValue:[NSNumber numberWithInteger:-1]]; which seems a little odd; we are defining it as a scalar type, and then giving it an NSNumber object as its default value. In the associated class, Run, processID is defined as an NSInteger. Still, this should be OK - it's all copied directly from the tutorial. It seems to me that the problem is probably in there somewhere. By the way, the getter method for processID is defined like this: - (int)processID { [self willAccessValueForKey:@"processID"]; NSInteger pid = processID; [self didAccessValueForKey:@"processID"]; return pid; } and this method works fine; it accesses and unpacks the default int value of processID (-1). Thanks for the help so far!

  • Finding window text and saving it to a file named after the window title won't work

    - by blood
    Hi, my code won't work and I don't know why. The point of my code is to find the top window and save a text file whose name is the same as the text on the top menu bar (task bar, I think?), then save some data to that text file. But every time I try to use it the write fails. If I set the name of the text file beforehand, so it won't change, it will write the data to the file. But if I don't set it beforehand it will create the text document but not write anything to it. Or sometimes it will just write numbers for the name (I think it's the handle number) and then it will write the data. It's odd; can anyone help? #include <iostream> #include <windows.h> #include <fstream> #include <string> #include <sstream> #include <time.h> using namespace std; string header_str = ("NULL"); #define DTTMFMT "%Y-%m-%d %H:%M:%S " #define DTTMSZ 21 char buff[DTTMSZ]; fstream filestr; string ff = ("C:\\System logs\\txst.txt"); TCHAR buf[255]; int main() { GetWindowText(GetForegroundWindow(), buf, 255); stringstream header(stringstream::in | stringstream::out); header.flush(); header << ("C:\\System logs\\"); header << buf; header << (".txt"); header_str = header.str(); ff = header_str; cout << header_str << "\n"; filestr.open (ff.c_str(), fstream::in | fstream::out | fstream::app | ios_base::binary | ios_base::out); filestr << "dfg"; filestr.close(); Sleep(10000); return 0; }

  • Java getInputStream SocketTimeoutException instead of NoRouteToHostException

    - by Jon
    I have an odd issue happening when trying to open multiple Input Streams (in separate threads) on Linux (RHEL). The behaviour works as expected on windows. I am kicking off 3 threads to open https connections to 3 different servers. All three are invalid IP addresses (in this test case), so I expect an NoRouteToHostException for each of them. The first two return these as expected, and quite quickly. (see stack trace below) However the third (and 4th when I tested it that way) do NOT give a no route exception. They wait for ages, and then give a SocketTimeoutException (see other stack trace below). This takes ages to come back, and does not accurately express the connection issue. The offending line of code is: reader = new BufferedReader(new InputStreamReader(conn.getInputStream())); Has anyone seen something like this before? Are there multi-threading issues with sockets on REHL or some limit somewhere to how many can connect at once...or...something? Expected stack trace, as received for first two: java.net.NoRouteToHostException: No route to host at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559) at sun.net.NetworkClient.doConnect(NetworkClient.java:158) at sun.net.www.http.HttpClient.openServer(HttpClient.java:394) at sun.net.www.http.HttpClient.openServer(HttpClient.java:529) at sun.net.www.protocol.https.HttpsClient.(HttpsClient.java:272) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:916) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234) Unexpected stack trace, as received on 3rd: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:529) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:559) at sun.net.NetworkClient.doConnect(NetworkClient.java:158) at sun.net.www.http.HttpClient.openServer(HttpClient.java:394) at sun.net.www.http.HttpClient.openServer(HttpClient.java:529) at sun.net.www.protocol.https.HttpsClient.(HttpsClient.java:272) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:329) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:172) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:916) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:158) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1177) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234)

  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special except for the fact that it deals with a large data set and can be a bit slow. I want to query this view based on a long. Two sample queries below show different queries to this view. var la = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).ToList(); var deDa = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP }).ToList(); The first one should hand back a List. The second one will be a list of anonymous objects. When I do these queries in entities framework the first one will hand me back a list of results where they're all exactly the same. The second query will hand me back data where the account number is the one that I queried and the other values differ. This seems to do this on a per account number basis, ie if I were to query for one account number or another all the Position objects for one account would have the same value (the first one in the list of Positions for that account) and the second account would have a set of Position objects that all had the same value (again, the first one in it's list of Position objects). I can write SQL that is in effect the same as either of the two EF queries. They both come back with results (say four) that show the correct data, one account number with different securities numbers. Why does this happen??? Is there something that I could be doing wrong so that if I had four results for the first query above that the first record's data also appears in the 2-4th's objects??? I cannot fathom what would/could be causing this. I've searched Google for all kinds of keywords and haven't seen anyone with this issue. We partial class out the Positions class for added functionality (smart object) and some smart properties. There are even some constructors that provide some view model type support. None of this is invoked in the request (I'm 99% sure of this). However, we do this same pattern all over the app. The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way that this would happen if the "primary keys" in the EDMX were not in fact unique given the way the view is constructed? I'm thinking that the dev who imported this model into the EDMX let the designer auto select what would be unique. Any help would give a haggered dev some hope!
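
    A quick way to test whether identity resolution on the view's inferred entity key is collapsing the rows is to run the same query with tracking disabled; if the results then come back distinct, the EDMX key is the culprit. A sketch, assuming the ObjectContext-era API the generated context implies (MergeOption lives in System.Data.Objects):

    ```csharp
    // Mirrors the first query from the question, but with identity resolution
    // turned off via MergeOption.NoTracking.
    var positions = Runtime.OmsEntityContext.Positions;   // ObjectSet<Position> / ObjectQuery<Position>
    positions.MergeOption = MergeOption.NoTracking;

    var la = positions.Where(p => p.AccountNumber == 12345678).ToList();
    // If 'la' now contains distinct rows, rows sharing the (non-unique) entity key
    // were previously being materialized as the same first tracked instance.
    ```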

  • SWIG & Java: Using carrays.i and array_functions for a C array of strings

    - by c12
    I have the below configuration where I'm trying to create a test C function that returns a pointer to an Array of Strings and then wrap that using SWIG's carrays.i and array_functions so that I can access the Array elements in Java. Uncertainties: %array_functions(char, SWIGArrayUtility); - not sure if char is correct inline char *getCharArray() - not sure if C function signature is correct String result = getCharArray(); - String return seems odd, but that's what is generated by SWIG SWIG.i: %module Test %{ #include "test.h" %} %include <carrays.i> %array_functions(char, SWIGArrayUtility); %include "test.h" %pragma(java) modulecode=%{ public static char[] getCharArrayImpl() { final int num = numFoo(); char ret[] = new char[num]; String result = getCharArray(); for (int i = 0; i < num; ++i) { ret[i] = SWIGArrayUtility_getitem(result, i); } return ret; } %} Inline Header C Function: #ifndef TEST_H #define TEST_H inline static unsigned short numFoo() { return 3; } inline char *getCharArray(){ static char* foo[3]; foo[0]="ABC"; foo[1]="5CDE"; foo[2]="EEE6"; return foo; } #endif Java Main Tester: public class TestMain { public static void main(String[] args) { System.loadLibrary("TestJni"); char[] test = Test.getCharArrayImpl(); System.out.println("length=" + test.length); for(int i=0; i < test.length; i++){ System.out.println(test[i]); } } } Java Main Tester Output: length=3 ? ? , SWIG Generated Java APIs: public class Test { public static String new_SWIGArrayUtility(int nelements) { return TestJNI.new_SWIGArrayUtility(nelements); } public static void delete_SWIGArrayUtility(String ary) { TestJNI.delete_SWIGArrayUtility(ary); } public static char SWIGArrayUtility_getitem(String ary, int index) { return TestJNI.SWIGArrayUtility_getitem(ary, index); } public static void SWIGArrayUtility_setitem(String ary, int index, char value) { TestJNI.SWIGArrayUtility_setitem(ary, index, value); } public static int numFoo() { return TestJNI.numFoo(); } public static String getCharArray() { return TestJNI.getCharArray(); } public static char[] getCharArrayImpl() { final int num = numFoo(); char ret[] = new char[num]; String result = getCharArray(); System.out.println("result=" + result); for (int i = 0; i < num; ++i) { ret[i] = SWIGArrayUtility_getitem(result, i); System.out.println("ret[" + i + "]=" + ret[i]); } return ret; } }

  • Service Broker message processing order

    - by Blootac
    Everywhere I read says that messages handled by the service broker are processed in the order that they arrive, and yet if you create a table, message type, contract, service etc , and on activation have a stored proc that waits for 2 seconds and inserts the msg into a table, set the max queue readers to 5 or 10, and send 20 odd messages I can see in the table that they are inserted out of order even though when I insert them into the queue and look at the contents of the queue I can see that the messages are all in the right order. Is it due to the delay waitfor waiting for the nearest second and each thread having different subsecond times and then fighting for a lock or something? The reason i've got a delay in there is to simulate delays with joins etc Thanks demo code: --create the table and service broker CREATE TABLE test ( id int identity(1,1), contents varchar(100) ) CREATE MESSAGE TYPE test CREATE CONTRACT mycontract ( test sent by initiator ) GO CREATE PROCEDURE dostuff AS BEGIN DECLARE @msg varchar(100); RECEIVE TOP (1) @msg = message_body FROM myQueue IF @msg IS NOT NULL BEGIN WAITFOR DELAY '00:00:02' INSERT INTO test(contents)values(@msg) END END GO ALTER QUEUE myQueue WITH STATUS = ON, ACTIVATION ( STATUS = ON, PROCEDURE_NAME = dostuff, MAX_QUEUE_READERS = 10, EXECUTE AS SELF ) create service senderService on queue myQueue ( mycontract ) create service receiverService on queue myQueue ( mycontract ) GO --********************************************************** --now insert lots of messages to the queue DECLARE @dialog_handle uniqueidentifier BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>'); BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>3</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>4</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>5</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>6</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>7</test>')

  • Information Gain and Entropy

    - by dhorn
    I recently read this question regarding information gain and entropy. I think I have a semi-decent grasp on the main idea, but I'm curious as what to do with situations such as follows: If we have a bag of 7 coins, 1 of which is heavier than the others, and 1 of which is lighter than the others, and we know the heavier coin + the lighter coin is the same as 2 normal coins, what is the information gain associated with picking two random coins and weighing them against each other? Our goal here is to identify the two odd coins. I've been thinking this problem over for a while, and can't frame it correctly in a decision tree, or any other way for that matter. Any help? EDIT: I understand the formula for entropy and the formula for information gain. What I don't understand is how to frame this problem in a decision tree format. EDIT 2: Here is where I'm at so far: Assuming we pick two coins and they both end up weighing the same, we can assume our new chances of picking H+L come out to 1/5 * 1/4 = 1/20 , easy enough. Assuming we pick two coins and the left side is heavier. There are three different cases where this can occur: HM: Which gives us 1/2 chance of picking H and a 1/4 chance of picking L: 1/8 HL: 1/2 chance of picking high, 1/1 chance of picking low: 1/1 ML: 1/2 chance of picking low, 1/4 chance of picking high: 1/8 However, the odds of us picking HM are 1/7 * 5/6 which is 5/42 The odds of us picking HL are 1/7 * 1/6 which is 1/42 And the odds of us picking ML are 1/7 * 5/6 which is 5/42 If we weight the overall probabilities with these odds, we are given: (1/8) * (5/42) + (1/1) * (1/42) + (1/8) * (5/42) = 3/56. The same holds true for option B. option A = 3/56 option B = 3/56 option C = 1/20 However, option C should be weighted heavier because there is a 5/7 * 4/6 chance to pick two mediums. So I'm assuming from here I weight THOSE odds. I am pretty sure I've messed up somewhere along the way, but I think I'm on the right path! EDIT 3: More stuff. Assuming the scale is unbalanced, the odds are (10/11) that only one of the coins is the H or L coin, and (1/11) that both coins are H/L Therefore we can conclude: (10 / 11) * (1/2 * 1/5) and (1 / 11) * (1/2) EDIT 4: Going to go ahead and say that it is a total 4/42 increase.
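
    For reference, the standard definitions being used here, plus the prior uncertainty of the puzzle itself (there are 7 × 6 = 42 equally likely heavy/light assignments):

    ```latex
    % Entropy of a discrete random variable X:
    \[
      H(X) = -\sum_{x} p(x)\,\log_2 p(x),
      \qquad
      IG(X;A) = H(X) - \sum_{a} p(a)\,H(X \mid A = a).
    \]
    % IG(X;A) is the expected reduction in entropy from observing the outcome A
    % of a weighing. For this puzzle the prior is uniform over the 7 * 6 = 42
    % possible (heavy, light) assignments, so before any weighing:
    \[
      H_{\text{prior}} = \log_2(7 \cdot 6) = \log_2 42 \approx 5.39 \text{ bits}.
    \]
    ```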
