Search Results

Search found 7049 results on 282 pages for 'ms nathan'.

Page 127/282 | < Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >

  • Click-Once install location from within application

    - by rein
    I'd like to programmatically determine the "publish location" (the location on the server which contains the installation) of the ClickOnce application I'm running. I know that the appref-ms file contains this information and I could parse that file to find it, but the application has no idea where the appref-ms file lives, and I can't seem to find a way of determining its location. Does anyone have any ideas about how I can easily determine the publish location from within my application?
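    One possibility (a sketch, assuming the application really is running as a ClickOnce deployment) is to skip the appref-ms file entirely and ask the deployment API, since ApplicationDeployment.CurrentDeployment.UpdateLocation reports the server location the application updates from, which is normally the publish location:

        // Requires a reference to the System.Deployment assembly
        using System;
        using System.Deployment.Application;

        public static class PublishLocation
        {
            public static Uri GetOrNull()
            {
                // CurrentDeployment throws if the app was not launched via ClickOnce
                if (!ApplicationDeployment.IsNetworkDeployed)
                    return null;

                // The server or file share the app updates itself from,
                // i.e. where the deployment manifest was published
                return ApplicationDeployment.CurrentDeployment.UpdateLocation;
            }
        }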

    Read the article

  • ASP .NET 4.0 How do I Redirect/Override the default CDN path for ScriptManager when EnableCDN=true

    - by plattnum
    I am using EnableCdn=true in my ScriptManager so that WebResource.axd and ScriptResource.axd are overridden with static links to JS libraries at the MS CDN service, as follows: <asp:ScriptManager ID="ScriptManager1" runat="server" EnableCdn="true" /> How do I override the CDN URLs or service so that I can retrieve the scripts over HTTPS from the MS CDN service rather than HTTP, to avoid the browser mixed-content warning? Or, for that matter, point at a different CDN service, or my own, entirely.
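    One sketch worth trying (hedged: whether the CDN-resolved path is visible at this point varies across framework versions, so verify against your setup) is to handle the ScriptManager's ResolveScriptReference event and rewrite the scheme, or substitute a different CDN host, before the reference is rendered:

        // Markup (illustrative): <asp:ScriptManager ID="ScriptManager1" runat="server"
        //     EnableCdn="true" OnResolveScriptReference="ScriptManager1_ResolveScriptReference" />
        protected void ScriptManager1_ResolveScriptReference(object sender, ScriptReferenceEventArgs e)
        {
            // Only rewrite when the page itself was served over HTTPS
            if (!Request.IsSecureConnection || string.IsNullOrEmpty(e.Script.Path))
                return;

            if (e.Script.Path.StartsWith("http://", StringComparison.OrdinalIgnoreCase))
            {
                // Swap the scheme, or point at your own CDN host here instead
                e.Script.Path = "https://" + e.Script.Path.Substring("http://".Length);
            }
        }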

    Read the article

  • Why is this code's execution speed so different?

    - by Steve Watkins
    In Internet Explorer 7, this code executes consistently in 47 ms: function updateObjectValues() { $('.objects').html(12345678); // ~500 DIVs } however, this code executes consistently in 157 ms: function updateObjectValues() { $('.objects').html('12345678'); // ~500 DIVs } Passing a number is over 3x faster than a string. Why are these results so dramatically different? And, is there any way to help the performance of the string?

    Read the article

  • MooTools: Fade out element?

    - by JamesBrownIsDead
    I have an Element object that I'm currently calling .hide() on. Instead, I'd like to fade the opacity of the entire Element (and its children) down to 0 (fully hidden) as a transition effect over maybe 500 ms or 1000 ms. Can Fx.Tween be used for this? Is this possible? Does the MooTools framework have an effect like this in its UI library?

    Read the article

  • Sequential (comb) GUIDs for Oracle

    - by Eyvind
    We are in the process of switching from the C# Guid.NewGuid() random-ish guid generator to the sequential guid algorithm suggested in this post. While this seems to work well for MS SQL Server, I am unsure about the implications for Oracle databases, in which we store guids in a raw(16) field. Does anyone have any insight as to whether this algorithm would be good for creating sequential guids for Oracle as well as for MS SQL Server, or if a different variant should be used. Thanks!
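    For reference, a sketch of a comb-style generator in C# (hedged: placeTimestampAtEnd=true matches SQL Server's uniqueidentifier ordering, where the last six bytes sort first; a raw(16) column compares bytes front-to-back, so for Oracle you would likely want the timestamp in the leading bytes instead, and you should verify how your driver actually writes the Guid bytes before relying on it):

        using System;

        public static class CombGuid
        {
            // placeTimestampAtEnd = true  -> last 6 bytes hold the timestamp (SQL Server ordering)
            // placeTimestampAtEnd = false -> first 6 bytes hold the timestamp (plain byte-wise ordering)
            public static Guid NewComb(bool placeTimestampAtEnd)
            {
                byte[] guidBytes = Guid.NewGuid().ToByteArray();

                DateTime baseDate = new DateTime(1900, 1, 1);
                DateTime now = DateTime.UtcNow;
                byte[] dayBytes = BitConverter.GetBytes((ushort)(now - baseDate).Days);
                byte[] msecBytes = BitConverter.GetBytes((int)(now.TimeOfDay.TotalMilliseconds / 3.333333));

                if (BitConverter.IsLittleEndian)
                {
                    Array.Reverse(dayBytes);   // big-endian so later timestamps compare higher
                    Array.Reverse(msecBytes);
                }

                int offset = placeTimestampAtEnd ? 10 : 0;
                Array.Copy(dayBytes, 0, guidBytes, offset, 2);
                Array.Copy(msecBytes, 0, guidBytes, offset + 2, 4);

                return new Guid(guidBytes);
            }
        }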

    Read the article

  • Click on notification starts activity twice

    - by Karussell
    I'm creating a notification from a service with the following code: NotificationManager notificationManager = (NotificationManager) ctx .getSystemService(Context.NOTIFICATION_SERVICE); CharSequence tickerText = "bla ..."; long when = System.currentTimeMillis(); Notification notification = new Notification(R.drawable.icon, tickerText, when); Intent notificationIntent = new Intent(ctx, SearchActivity.class). putExtra(SearchActivity.INTENT_SOURCE, MyNotificationService.class.getSimpleName()); PendingIntent contentIntent = PendingIntent.getActivity(ctx, 0, notificationIntent, 0); notification.setLatestEventInfo(ctx, ctx.getString(R.string.app_name), tickerText, contentIntent); notification.flags |= Notification.FLAG_AUTO_CANCEL; notificationManager.notify(1, notification); The logs clearly say that the method startActivity is called twice: 04-02 23:48:06.923: INFO/ActivityManager(2466): Starting activity: Intent { act=android.intent.action.SEARCH cmp=com.xy/.SearchActivity bnds=[0,520][480,616] (has extras) } 04-02 23:48:06.923: WARN/ActivityManager(2466): startActivity called from non-Activity context; forcing Intent.FLAG_ACTIVITY_NEW_TASK for: Intent { act=android.intent.action.SEARCH cmp=com.xy/.SearchActivity bnds=[0,520][480,616] (has extras) } 04-02 23:48:06.958: INFO/ActivityManager(2466): Starting activity: Intent { act=android.intent.action.SEARCH cmp=com.xy/.SearchActivity bnds=[0,0][480,96] (has extras) } 04-02 23:48:06.958: WARN/ActivityManager(2466): startActivity called from non-Activity context; forcing Intent.FLAG_ACTIVITY_NEW_TASK for: Intent { act=android.intent.action.SEARCH cmp=com.xy/.SearchActivity bnds=[0,0][480,96] (has extras) } 04-02 23:48:07.087: INFO/notification(5028): onStartCmd: received start id 2: Intent { cmp=com.xy/.NotificationService } 04-02 23:48:07.310: INFO/notification(5028): onStartCmd: received start id 3: Intent { cmp=com.xy/.NotificationService } 04-02 23:48:07.392: INFO/ActivityManager(2466): Displayed activity com.xy/.SearchActivity: 462 ms (total 462 ms) 04-02 23:48:07.392: INFO/ActivityManager(2466): Displayed activity com.xy/.SearchActivity: 318 ms (total 318 ms) Why is the activity started twice? There are two identical questions on Stack Overflow: here and here. But they do not explain what the underlying issue could be, and their suggestions do not work for me. E.g. changing to launchMode singleTop is not appropriate for me, and according to the official docs (see Invoking the search dialog) it should work without changing launchMode. Nevertheless, I also tried adding the following flags to notificationIntent: Intent.FLAG_ACTIVITY_CLEAR_TOP | PendingIntent.FLAG_UPDATE_CURRENT, but the problem remains the same.

    Read the article

  • How to query data from a password protected https website

    - by Addie
    I'd like my application to query a csv file from a secure website. I have no experience with web programming so I'd appreciate detailed instructions. Currently I have the user login to the site, manually query the csv, and have my application load the file locally. I'd like to automate this by having the user enter his login information, authenticating him on the website, and querying the data. The application is written in C# .NET. The url of the site is: https://www2.emidas.com/default.asp. I've tested the following code already and am able to access the file once the user has already authenticated himself and created a manual query. System.Net.WebClient Client = new WebClient(); Stream strm = Client.OpenRead("https://www3.emidas.com/users/<username>/file.csv"); Here is the request sent to the site for authentication. I've angle bracketed the real userid and password. POST /pwdVal.asp HTTP/1.1 Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/x-shockwave-flash, */* User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; Tablet PC 2.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E) Content-Type: application/x-www-form-urlencoded Accept-Encoding: gzip, deflate Cookie: ASPSESSIONID<unsure if this data contained password info so removed>; ClientId=<username> Host: www3.emidas.com Content-Length: 36 Connection: Keep-Alive Cache-Control: no-cache Accept-Language: en-US client_id=<username>&password=<password>
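    A sketch of that flow in C# (hedged: the form field names come from the captured POST above, the URLs are illustrative, and the site may also require hidden form fields or extra cookies that would need to be replicated):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        public static class EmidasDownloader
        {
            public static string DownloadCsv(string userId, string password)
            {
                var cookies = new CookieContainer();

                // 1) Post the login form so the server issues the session cookies
                var login = (HttpWebRequest)WebRequest.Create("https://www2.emidas.com/pwdVal.asp");
                login.Method = "POST";
                login.ContentType = "application/x-www-form-urlencoded";
                login.CookieContainer = cookies;

                byte[] body = Encoding.UTF8.GetBytes(
                    "client_id=" + Uri.EscapeDataString(userId) +
                    "&password=" + Uri.EscapeDataString(password));
                login.ContentLength = body.Length;
                using (Stream s = login.GetRequestStream())
                    s.Write(body, 0, body.Length);
                using (login.GetResponse()) { }   // discard the response body, keep the cookies

                // 2) Request the CSV with the same cookie container so the session is reused
                var csv = (HttpWebRequest)WebRequest.Create(
                    "https://www3.emidas.com/users/" + userId + "/file.csv");
                csv.CookieContainer = cookies;
                using (var response = (HttpWebResponse)csv.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    return reader.ReadToEnd();
            }
        }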

    Read the article

  • How to know whether there is something in a cell or not?

    - by Harikrishna
    I have a table like the one below in the html file: <table border="1"> <tr> <td>BuyQuantity</td><td>SellQuantity</td> </tr> <tr> <td>15D</td><td>&nbsp;<span style="font-family: &quot;Arial Unicode MS&quot;;"><o:p></o:p></span></p></td> </tr> <tr> <td>&nbsp;<span style="font-family: &quot;Arial Unicode MS&quot;;"><o:p></o:p></span></td><td>38D</td> </tr> <tr> <td>&nbsp;<span style="font-family: &quot;Arial Unicode MS&quot;;"><o:p></o:p></span></td><td>99D</td> </tr> <tr> <td>38D</td><td>&nbsp;<span style="font-family: &quot;Arial Unicode MS&quot;;"><o:p></o:p></span></td> </tr> </table> There are two columns: buy-quantity and sell-quantity. (This is only an example; each html file has different content in its table.) Now, from the content of these two columns, I want to decide whether items were bought or sold. For example, in the first row there is 15D in buy-quantity, so items were bought; in the second row there is 38D in sell-quantity, so items were sold; and so on. After deciding whether items were bought or sold, I want to build one column named Buy/Sell for this table. How can I do this?
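    A sketch of one way to test the cells (hedged: it assumes the HTML Agility Pack library for parsing, and the column positions are taken from the sample table above):

        using System;
        using HtmlAgilityPack;   // assumption: the HTML Agility Pack library is referenced

        public static class BuySellClassifier
        {
            public static void Classify(string html)
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);

                // Skip the header row, then inspect each data row's two cells
                var rows = doc.DocumentNode.SelectNodes("//table//tr");
                for (int i = 1; i < rows.Count; i++)
                {
                    var cells = rows[i].SelectNodes("td");
                    if (cells == null || cells.Count < 2) continue;

                    // A cell that held only &nbsp; and empty spans becomes "" after de-entitizing and trimming
                    string buy  = HtmlEntity.DeEntitize(cells[0].InnerText).Trim();
                    string sell = HtmlEntity.DeEntitize(cells[1].InnerText).Trim();

                    string buySell = buy.Length > 0 ? "Buy" : (sell.Length > 0 ? "Sell" : "");
                    Console.WriteLine("{0,-10} {1,-10} {2}", buy, sell, buySell);
                }
            }
        }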

    Read the article

  • Calculate the SUM of the Column which has Time DataType:

    - by thevan
    I want to calculate the Sum of the Field which has Time DataType. My Table is Below: TableA: TotalTime ------------- 12:18:00 12:18:00 Here I want to sum the two time fields. I tried the below Query SELECT CAST( DATEADD(MS, SUM(DATEDIFF(MS, '00:00:00.000', CONVERT(TIME, TotalTime))), '00:00:00.000' ) AS TOTALTIME) FROM [TableA] But it gives the Output as TOTALTIME ----------------- 00:36:00.0000000 But My Desired Output would be like below: TOTALTIME ----------------- 24:36:00 How to get this Output?
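    If the formatting can be done in the calling application rather than in T-SQL, a hedged alternative is to sum the values as TimeSpans in C# (a time column typically surfaces as TimeSpan through ADO.NET) and format the hours yourself, since TimeSpan does not wrap at 24 hours:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class TotalTimeFormatter
        {
            public static string SumAsClockText(IEnumerable<TimeSpan> times)
            {
                // TimeSpan happily exceeds 24 hours, so there is no wrap-around at midnight
                TimeSpan total = times.Aggregate(TimeSpan.Zero, (acc, t) => acc + t);

                // Hours:Minutes:Seconds with the hour part allowed to pass 24
                return string.Format("{0}:{1:D2}:{2:D2}",
                    (int)total.TotalHours, total.Minutes, total.Seconds);
            }

            public static void Main()
            {
                var rows = new[] { TimeSpan.Parse("12:18:00"), TimeSpan.Parse("12:18:00") };
                Console.WriteLine(SumAsClockText(rows));   // prints 24:36:00
            }
        }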

    Read the article

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, best performance is obtained. This is easy to demonstrate with the likes of an RB-Tree. Sequential writes cause a maximum number of tree balances to be performed. I know very few databases use binary trees, but rather use n-order balanced trees. I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple re-balances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[T1]( [ID] [int] NOT NULL, [Padding] [char](300) NULL, CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO CREATE TABLE [dbo].[T2]( [ID] [uniqueidentifier] NOT NULL, [Padding] [char](300) NULL, CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300) set @t1 = GETDATE() set @i = 1 while @i < 2000 begin insert into T2 values (NEWID(), @c) set @i = @i + 1 end set @t2 = GETDATE() WAITFOR delay '0:0:10' set @t3 = GETDATE() set @i = 1 while @i < 2000 begin insert into T1 values (@i, @c) set @i = @i + 1 end select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID] drop table T1 drop table T2 Note that I am not subtracting any time for the creation of the GUID nor for the considerable extra size of the row. The results on my machine were as follows: Int: 17,340 ms GUID: 6,746 ms This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this? PS: I get that this isn't a question. It's an invitation to discussion, and that is relevant to learning optimum programming.

    Read the article

  • Best ASP.Net Host for Developers [closed]

    - by Tyler
    I would need it to allow me to host subdomains and multiple domains is a huge plus. Required: ASP.NET 2.0, 3.0, 3.5 Subdomain Hosting MS-SQL & MySQL Databases Want Multiple Domain Hosting ASP.NET 4.0 Ability to directly connect to MS SQL using SQL SMS

    Read the article

  • Eclipse Debug Mode disrupting MSSQL Server 2005 Stored Procedure access

    - by Sathish
    We have a strange problem in our team. When a developer is using Eclipse in Debug mode, MS SQL Server 2005 blocks other developers from accessing a stored procedure. Debug session typically involves opening Hibernate session to persist an entity which could be accessing a stored procedure used for Primary key generation. Debugging is done in business logic code and rarely in JDBC stored procedure call. Is there any way to configure MS SQL server or the stored procedure so that other developers are not blocked?

    Read the article

  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about 1 minute to run, and has the following IO statistics: SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits FROM dbo.TRANS AS T JOIN dbo.TRANS AS T2 ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT JOIN TASK_REQUESTS AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN AND T3.TASK = 'UPDATE_MEM_BAL' GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT (4447 row(s) affected) Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms. If I instead introduce a temporary table then the query returns quickly and performs less logical reads: CREATE TABLE #MyTable(RGN VARCHAR(20) NOT NULL, CD VARCHAR(20) NOT NULL, PRIMARY KEY([RGN],[CD])); INSERT INTO #MyTable(RGN, CD) SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK='UPDATE_MEM_BAL'; SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits FROM dbo.TRANS AS T JOIN dbo.TRANS AS T2 ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT JOIN #MyTable AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT (4447 row(s) affected) Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms. The interesting thing for me is that the TASK_REQUEST table is a small table (3 rows at present) and statistics are up to date on the table. Any idea why such different execution plans and execution times would be occuring? And ideally how to change things so that I don't need to use the temp table to get decent performance? The only real difference in the execution plans is that the temp table version introduces an index spool (eager spool) operation.

    Read the article

  • How optimize queries with fully qualified names in t-sql?

    - by tomaszs
    When I call: select * from Database.dbo.Table where NAME = 'cat' It takes: 200 ms And when I switch to Database in Management Studio and call it without the fully qualified name, it's much faster: select * from Table where NAME = 'cat' It takes: 17 ms Is there any way to make fully qualified queries faster without changing the current database?

    Read the article

  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute a code which first transfers data from CPU to GPU memory and vice-versa. In spite of increasing the volume of data, the data transfer time remains the same as if no data transfer is actually taking place. I am posting the code. #include <stdio.h> /* Core input/output operations */ #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */ #include <math.h> /* Common mathematical functions */ #include <time.h> /* Converting between various date/time formats */ #include <cuda.h> /* CUDA related stuff */ #include <sys/time.h> __global__ void device_volume(float *x_d,float *y_d) { int index = blockIdx.x * blockDim.x + threadIdx.x; } int main(void) { float *x_h,*y_h,*x_d,*y_d,*z_h,*z_d; long long size=9999999; long long nbytes=size*sizeof(float); timeval t1,t2; double et; x_h=(float*)malloc(nbytes); y_h=(float*)malloc(nbytes); z_h=(float*)malloc(nbytes); cudaMalloc((void **)&x_d,size*sizeof(float)); cudaMalloc((void **)&y_d,size*sizeof(float)); cudaMalloc((void **)&z_d,size*sizeof(float)); gettimeofday(&t1,NULL); cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("\n %ld\t\t%f\t\t",nbytes,et); et=0.0; //printf("%f %d\n",seconds,CLOCKS_PER_SEC); // launch a kernel with a single thread to greet from the device //device_volume<<<1,1>>>(x_d,y_d); gettimeofday(&t1,NULL); cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("%f\n",et); cudaFree(x_d); cudaFree(y_d); cudaFree(z_d); return 0; } Can anybody help me with this issue? Thanks

    Read the article

  • Putting indexes in separate filegroup kills our queries

    - by womp
    Can anyone shed some light on this? On our dev boxes, our database resides entirely in the PRIMARY filegroup, and everything works fine. On one of our production servers, recently upgraded from 2005 to 2008, we noticed it was performing slower than it should. On this machine, there are two filegroups - PRIMARY and INDEXES. Both filegroups contain 1 file per logical volume, one logical volume per CPU, (and each logical volume is a RAID 10 of 4 physical disks). We isolated a few queries that were performing fast on the dev boxes and slow (up to 40x slower) on the production machine. Turned out these queries were using the non-clustered indexes that resided in the INDEXES filegroup. Tweaking some of the queries to only use clustered indexes that were in the PRIMARY filegroup dropped their times back to normal. As a final confirmation, we redeployed the same database on the same machine to have everything in PRIMARY, and things went back to normal! Here's the statistics output of one of the queries, run identically on the machine with different filegroup configurations (table names changed to protect the innocent): FAST (everything in PRIMARY filegroup): (3 row(s) affected) Table '0'. Scan count 2, logical reads 14, ... Table '1'. Scan count 0, logical reads 0, ... Table '1'. Scan count 0, logical reads 0, ... Table '2'. Scan count 2, logical reads 7, ... Table '3'. Scan count 2, logical reads 1012, ... Table '4'. Scan count 1, logical reads 3, ... SQL Server Execution Times: CPU time = 437 ms, elapsed time = 445 ms. SLOW (indexes split into their own filegroup): (3 row(s) affected) Table '0'. Scan count 209, logical reads 428, ... Table '1'. Scan count 0, logical reads 0,... Table '2'. Scan count 1021, logical reads 9043,.... Table '3'. Scan count 209, logical reads 105754, .... Table '4'. Scan count 0, logical reads 0, .... Table '5'. Scan count 1, logical reads 695, ... **Table '#46DA8CA9'. Scan count 205, logical reads 205, ...** Table '6'. Scan count 6, logical reads 436, ... Table '7'. Scan count 1, logical reads 12,.... SQL Server Execution Times: CPU time = 17581 ms, elapsed time = 17595 ms. Notice the weird temp table and extra tables involved in the slow query. It seems clear that having a second file group is making SQL Server batty with choosing an execution plan. What the heck is going on?

    Read the article

  • C#/.NET Little Wonders: Interlocked CompareExchange()

    - by James Michael Hare
Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Two posts ago, I discussed the Interlocked Add(), Increment(), and Decrement() methods (here) for adding and subtracting values in a thread-safe, lightweight manner.  Then, last post I talked about the Interlocked Read() and Exchange() methods (here) for safely and efficiently reading and setting 32 or 64 bit values (or references).  This week, we’ll round out the discussion by talking about the Interlocked CompareExchange() method and how it can be put to use to exchange a value if the current value is what you expected it to be. Dirty reads can lead to bad results Many of the uses of Interlocked that we’ve explored so far have centered around either reading, setting, or adding values.  But what happens if you want to do something more complex such as setting a value based on the previous value in some manner? Perhaps you were creating an application that reads a current balance, applies a deposit, and then saves the new modified balance, where of course you’d want that to happen atomically.  If you read the balance, then go to save the new balance, and in between the previous balance has already changed, you’ll have an issue!  Think about it: if we read the current balance as $400 and are applying a new deposit of $50.75, but meanwhile someone else deposits $200 and sets the total to $600, and we then write a total of $450.75, we’ve lost $200! Now, certainly for int and long values we can use Interlocked.Add() to handle these cases, and it works well for that.  But what if we want to work with doubles, for example?  Let’s say we wanted to add the numbers from 0 to 99,999 in parallel.  We could do this by spawning several parallel tasks to continuously add to a total: 1: double total = 0; 2:  3: Parallel.For(0, 100000, next => 4: { 5: total += next; 6: }); Were this run on one thread using a standard for loop, we’d expect an answer of 4,999,950,000 (the sum of all numbers from 0 to 99,999).  But when we run this in parallel as written above, we’ll likely get something far off.  The result of one of my runs, for example, was 1,281,880,740.  That is way off!  If this were banking software we’d be in big trouble with our clients.  So what happened?  The += operator is not atomic: it reads in the current value, adds to it, then stores the result back into the total.  At any point in all of this another thread could read a “dirty” current total and accidentally “skip” our add.   So, to clean this up, we could use a lock to guarantee consistency: 1: double total = 0.0; 2: object locker = new object(); 3:  4: Parallel.For(0, 100000, next => 5: { 6: lock (locker) 7: { 8: total += next; 9: } 10: }); Which will give us the correct result of 4,999,950,000.  One thing to note is that locking can be heavy, especially if the operation being locked over is trivial, or the life of the lock is a high percentage of the work being performed concurrently.  In the case above, the lock consumes pretty much all of the time of each parallel task – and the task being locked on is relatively trivial. Now, let me put in a disclaimer here before we go further: For most uses, lock is more than sufficient for your needs, and is often the simplest solution!    So, if lock is sufficient for most needs, why would we ever consider another solution?  
The problem with locking is that it can suspend execution of your thread while it waits for the signal that the lock is free.  Moreover, if the operation being locked over is trivial, the lock can add a very high level of overhead.  This is why things like Interlocked.Increment() perform so well, instead of locking just to perform an increment, we perform the increment with an atomic, lockless method. As with all things performance related, it’s important to profile before jumping to the conclusion that you should optimize everything in your path.  If your profiling shows that locking is causing a high level of waiting in your application, then it’s time to consider lighter alternatives such as Interlocked. CompareExchange() – Exchange existing value if equal some value So let’s look at how we could use CompareExchange() to solve our problem above.  The general syntax of CompareExchange() is: T CompareExchange<T>(ref T location, T newValue, T expectedValue) If the value in location == expectedValue, then newValue is exchanged.  Either way, the value in location (before exchange) is returned. Actually, CompareExchange() is not one method, but a family of overloaded methods that can take int, long, float, double, pointers, or references.  It cannot take other value types (that is, can’t CompareExchange() two DateTime instances directly).  Also keep in mind that the version that takes any reference type (the generic overload) only checks for reference equality, it does not call any overridden Equals(). So how does this help us?  Well, we can grab the current total, and exchange the new value if total hasn’t changed.  This would look like this: 1: // grab the snapshot 2: double current = total; 3:  4: // if the total hasn’t changed since I grabbed the snapshot, then 5: // set it to the new total 6: Interlocked.CompareExchange(ref total, current + next, current); So what the code above says is: if the amount in total (1st arg) is the same as the amount in current (3rd arg), then set total to current + next (2nd arg).  This check and exchange pair is atomic (and thus thread-safe). This works if total is the same as our snapshot in current, but the problem, is what happens if they aren’t the same?  Well, we know that in either case we will get the previous value of total (before the exchange), back as a result.  Thus, we can test this against our snapshot to see if it was the value we expected: 1: // if the value returned is != current, then our snapshot must be out of date 2: // which means we didn't (and shouldn't) apply current + next 3: if (Interlocked.CompareExchange(ref total, current + next, current) != current) 4: { 5: // ooops, total was not equal to our snapshot in current, what should we do??? 6: } So what do we do if we fail?  That’s up to you and the problem you are trying to solve.  It’s possible you would decide to abort the whole transaction, or perhaps do a lightweight spin and try again.  Let’s try that: 1: double current = total; 2:  3: // make first attempt... 4: if (Interlocked.CompareExchange(ref total, current + i, current) != current) 5: { 6: // if we fail, go into a spin wait, spin, and try again until succeed 7: var spinner = new SpinWait(); 8:  9: do 10: { 11: spinner.SpinOnce(); 12: current = total; 13: } 14: while (Interlocked.CompareExchange(ref total, current + i, current) != current); 15: } 16:  This is not trivial code, but it illustrates a possible use of CompareExchange().  
What we are doing is first checking to see if we succeed on the first try, and if so, great!  If not, we create a SpinWait and then go through the process of SpinOnce(), grab a fresh snapshot, and retry until CompareExchange() succeeds.  You may wonder why we don't use a simple do-while here; the reason is that it's more efficient to create the SpinWait only once we know we actually need one. Though not as simple (or maintainable) as a simple lock, this will perform better in many situations.  Comparing an unlocked (and wrong) version, a version using lock, and the Interlocked version of the code, we get the following average times for multiple iterations of summing 100,000 numbers: 1: Unlocked money average time: 2.1 ms 2: Locked money average time: 5.1 ms 3: Interlocked money average time: 3 ms So the Interlocked.CompareExchange(), while heavier to code, came in lighter than the lock, offering a good compromise of safety and performance when we need to reduce contention. CompareExchange() - it’s not just for adding stuff… So that was one simple use of CompareExchange() in the context of adding double values -- which meant we couldn’t have used the simpler Interlocked.Add() -- but it has other uses as well. If you think about it, this really works anytime you want to create something new based on a current value without using a full lock.  For example, you could use it to create a simple lazy instantiation implementation.  In this case, we want to set the lazy instance only if the previous value was null: 1: public static class Lazy<T> where T : class, new() 2: { 3: private static T _instance; 4:  5: public static T Instance 6: { 7: get 8: { 9: // if current is null, we need to create new instance 10: if (_instance == null) 11: { 12: // attempt create, it will only set if previous was null 13: Interlocked.CompareExchange(ref _instance, new T(), (T)null); 14: } 15:  16: return _instance; 17: } 18: } 19: } So, if _instance == null, this will create a new T() and attempt to exchange it with _instance.  If _instance is not null, then it does nothing and we discard the new T() we created. This is a way to create lazy instances of a type where we are more concerned about locking overhead than creating an accidental duplicate which is not used.  In fact, the BCL implementation of Lazy<T> offers a similar thread-safety choice for Publication thread safety, where it will not guarantee only one instance was created, but it will guarantee that all readers get the same instance.  Another possible use would be in concurrent collections.  Let’s say, for example, that you are creating your own brand new super stack that uses a linked list paradigm and is “lock free”.  We could use Interlocked.CompareExchange() to be able to do a lockless Push() which could be more efficient in multi-threaded applications where several threads are pushing and popping on the stack concurrently. Yes, there are already concurrent collections in the BCL (in .NET 4.0 as part of the TPL), but it’s a fun exercise!  
So let’s assume we have a node like this: 1: public sealed class Node<T> 2: { 3: // the data for this node 4: public T Data { get; set; } 5:  6: // the link to the next instance 7: internal Node<T> Next { get; set; } 8: } Then, perhaps, our stack’s Push() operation might look something like: 1: public sealed class SuperStack<T> 2: { 3: private volatile Node<T> _head; 4:  5: public void Push(T value) 6: { 7: var newNode = new Node<T> { Data = value, Next = _head }; 8:  9: if (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next) 10: { 11: var spinner = new SpinWait(); 12:  13: do 14: { 15: spinner.SpinOnce(); 16: newNode.Next = _head; 17: } 18: while (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next); 19: } 20: } 21:  22: // ... 23: } Notice a similar paradigm here as with adding our doubles before.  What we are doing is creating the new Node with the data to push, and with a Next value being the original node referenced by _head.  This will create our stack behavior (LIFO – Last In, First Out).  Now, we have to set _head to refer to the newNode, but we must first make sure it hasn’t changed! So we check to see if _head has the same value we saved in our snapshot as newNode.Next, and if so, we set _head to newNode.  This is all done atomically, and the result is _head’s original value. As long as that original value was what we assumed it was (newNode.Next), we are good and we set it without a lock!  If not, we SpinWait and try again. Once again, this is much lighter than locking in highly parallelized code with lots of contention.  If I compare the method above with a similar class using lock, I get the following results for pushing 100,000 items: 1: Locked SuperStack average time: 6 ms 2: Interlocked SuperStack average time: 4.5 ms So, once again, we can get more efficient than a lock, though there is the cost of added code complexity.  Fortunately for you, most of the concurrent collections you’d ever need are already created for you in the System.Collections.Concurrent (here) namespace – for more information, see my Little Wonders – The Concurrent Collections Part 1 (here), Part 2 (here), and Part 3 (here). Summary We’ve seen before how the Interlocked class can be used to safely and efficiently add, increment, decrement, read, and exchange values in a multi-threaded environment.  In addition to these, Interlocked CompareExchange() can be used to perform more complex logic without the need for a lock when lock contention is a concern. The added efficiency, though, comes at the cost of more complex code.  As such, the standard lock is often sufficient for most thread-safety needs.  But if profiling indicates you spend a lot of time waiting for locks, or if you just need a lock for something simple such as an increment, decrement, read, exchange, etc., then consider using the Interlocked class’s methods to reduce wait. Technorati Tags: C#,CSharp,.NET,Little Wonders,Interlocked,CompareExchange,threading,concurrency
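As a hedged footnote to the double-adding example earlier in the post, the snapshot/compare/retry pattern can be wrapped once so callers get an Add()-like call for doubles (a sketch only; Interlocked.Add() itself ships with just int and long overloads):

        using System.Threading;

        public static class InterlockedDouble
        {
            // Atomically adds value to location and returns the new total, using the
            // same snapshot / CompareExchange / retry pattern described above.
            public static double Add(ref double location, double value)
            {
                var spinner = new SpinWait();
                while (true)
                {
                    double snapshot = location;            // grab the current total
                    double computed = snapshot + value;    // the value we want to store
                    // Only store it if nobody changed the total since our snapshot
                    if (Interlocked.CompareExchange(ref location, computed, snapshot) == snapshot)
                        return computed;
                    spinner.SpinOnce();                    // lost the race; back off and retry
                }
            }
        }

        // Usage with a static double field _total (illustrative):
        // Parallel.For(0, 100000, next => InterlockedDouble.Add(ref _total, next));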

    Read the article

  • Lighttpd not cleanly restarting (address already in use)

    - by NilObject
    When doing a dist-upgrade recently, my lighttpd-1.4.19 install on Ubuntu 8.0.4 has begun failing to restart or reload properly with the /etc/init.d/lighttpd restart command. ~$ sudo /etc/init.d/lighttpd restart * Stopping web server lighttpd ...done. * Starting web server lighttpd 2009-06-13 04:06:36: (network.c.300) can't bind to port: 80 Address already in use ...fail! The same error occurs when I do a reload. The way I get around it is to kill lighttpd and then issue the start command, but it seems like I shouldn't have to do that :) I've looked at my config files, and can't spot any immediate errors. Does anyone have any ideas what can be causing this error? This seems to be the latest version as of writing this question that is available via the apt-get route. My config file is: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load # mod_access, mod_accesslog and mod_alias are loaded by default # all other module should only be loaded if neccesary # - saves some time # - saves memory server.modules = ( "mod_access", "mod_alias", "mod_accesslog", "mod_compress", "mod_fastcgi", "mod_rewrite", "mod_redirect", ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" fastcgi.server = (".php" => (( "bin-path" => "/usr/bin/php5-cgi", "socket" => "/tmp/php.socket" ))) ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "audio/x-wav", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".rss" => "application/rss+xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar" ) include_shell "/usr/share/lighttpd/include-conf-enabled.pl" My /etc/init.d/lighttpd script is (untouched from installation): #!/bin/sh ### BEGIN INIT INFO # Provides: lighttpd # Required-Start: networking # Required-Stop: networking # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start the lighttpd web server. ### END INIT INFO PATH=/sbin:/bin:/usr/sbin:/usr/bin DAEMON=/usr/sbin/lighttpd NAME=lighttpd DESC="web server" PIDFILE=/var/run/$NAME.pid SCRIPTNAME=/etc/init.d/$NAME ENV="env -i LANG=C PATH=/usr/local/bin:/usr/bin:/bin" SSD="/sbin/start-stop-daemon" DAEMON_OPTS="-f /etc/lighttpd/lighttpd.conf" test -x $DAEMON || exit 0 set -e # be sure there is a /var/run/lighttpd, even with tmpfs mkdir -p /var/run/lighttpd > /dev/null 2> /dev/null chown www-data:www-data /var/run/lighttpd chmod 0750 /var/run/lighttpd . /lib/lsb/init-functions case "$1" in start) log_daemon_msg "Starting $DESC" $NAME if ! 
$ENV $SSD --start --quiet\ --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_OPTS ; then log_end_msg 1 else log_end_msg 0 fi ;; stop) log_daemon_msg "Stopping $DESC" $NAME if $SSD --quiet --stop --oknodo --retry 30\ --pidfile $PIDFILE --exec $DAEMON; then rm -f $PIDFILE log_end_msg 0 else log_end_msg 1 fi ;; reload) log_daemon_msg "Reloading $DESC configuration" $NAME if $SSD --stop --signal 2 --oknodo --retry 30\ --quiet --pidfile $PIDFILE --exec $DAEMON; then if $ENV $SSD --start --quiet \ --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_OPTS ; then log_end_msg 0 else log_end_msg 1 fi else log_end_msg 1 fi ;; restart|force-reload) $0 stop [ -r $PIDFILE ] && while pidof lighttpd |\ grep -q `cat $PIDFILE 2>/dev/null` 2>/dev/null ; do sleep 1; done $0 start ;; *) echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2 exit 1 ;; esac exit 0

    Read the article

  • How To Skip Commercials in Windows 7 Media Center

    - by DigitalGeekery
    If you use Windows 7 Media Center to record live TV, you’re probably interested in skipping through commercials. After all, a big reason to record programs is to avoid commercials, right? Today we focus on a fairly simple and free way to get you skipping commercials in no time. In Windows 7, the .wtv file format has replaced the dvr-ms file format used in previous versions of Media Center for Recorded TV. The .wtv file format, however, does not work very well with commercial skipping applications.  The Process Our first step will be to convert the recorded .wtv files to the previously used dvr-ms file format. This conversion will be done automatically by WtvWatcher. It’s important to note that this process deletes the original .wtv file after successfully converting to .dvr-ms. Next, we will use DVRMSToolBox with the DTB Addin to handle commercials skipping. This process does not “cut” or remove the commercials from the file. It merely skips the commercials during playback. WtvWatcher Download and install the WTVWatcher (link below). To install WtvWatcher, you’ll need to have Windows Installer 3.1 and .NET Framework 3.5 SP1. If you get the Publisher cannot be verified warning you can go ahead and click Install. We’ve completely tested this app and it contains no malware and runs successfully.   After installing, the WtvWatcher will pop up in the lower right corner of your screen. You will need to set the path to your Recorded TV directory. Click on the button for “Click here to set your recorded TV path…”   The WtvWatcher Preferences window will open…   …and you’ll be prompted to browse for your Recorded TV folder. If you did not change the default location at setup, it will be found at C:\Users\Public\Recorded TV. Click “OK” when finished. Click the “X” to close the Preferences screen. You should now see WtvWatcher begin to convert any existing WTV files.   The process should only take a few minutes per file. Note: If WtvWatcher detects an error during the conversion process, it will not delete the original WTV file.    You will probably want to run WtvWatcher on startup. This will allow WtvWatcher too constantly scan for new .wtv files to convert. There is no setting in the application to run on startup, so you’ll need to copy the WTV icon from your desktop into your Windows start menu “Startup” directory. To do so, click on Start > All Programs, right-click on Startup and click on Open all users. Drag and drop, or cut and paste, the WtvWatcher desktop shortcut into the Startup folder. DVRMSToolBox and DTBAddIn Next, we need to download and install the DVRMSToolBox and the DTBAddIn. These two pieces of software will do the actual commercial skipping. After downloading the DVRMSToolBox zip file, extract it and double-click the setup.exe file.  Click “Next” to begin the installation.   Unless DVRMSToolBox will only be used by Administrator accounts, check the “Modify File Permissions” box. Click “Next.” When you get to the Optional Components window, uncheck Download/Install ShowAnalyzer. We will not be using that application. When the installation is complete, click “Close.”    Next we need to install the DTBAddin. Unzip the download folder and run the appropriate .msi file for your system. It is available in 32 & 64 bit versions. Just double click on the file and take the default options. Click “Finish” when the install is completed. You will then be prompted to restart your computer. 
After your computer has restarted, open DVRMSToolBox settings by going to Start > All Programs, DVRMSToolBox, and click on DVRMStoMPEGSettings. On the MC Addin tab, make sure that Skip Commercials is checked. It should be by default.   On the Commercial Skip tab, make sure the Auto Skip option is selected. Click “Save.”   If you try to watch recorded TV before the file conversion and commercial indexing process is complete you’ll get the following message pop up in Media Center. If you click Yes, it will start indexing the commercials if WtvWatcher has already converted it to dvr-ms. Now you’re ready to kick back and watch your recorded tv without having to wait through those long commercial breaks. Conclusion The DVRMSToolBox is a powerful and complex application with a multitude of features and utilities. We’ve showed you a quick and easy way to get your Windows Media Center setup to skip commercials. This setup, like virtually all commercial skipping setups, is not perfect. You will occasionally find a commercial that doesn’t get skipped. Need help getting your Windows 7 PC configured for TV? Check out our previous tutorial on setting up live TV in Windows Media Center. Links Download WTV Watcher Download DVRMSToolBox Download DTBAddin Similar Articles Productive Geek Tips Using Netflix Watchnow in Windows Vista Media Center (Gmedia)Increase Skip and Replay Intervals in Windows 7 Media CenterSchedule Updates for Windows Media CenterIntegrate Hulu Desktop and Windows Media Center in Windows 7Add Color Coding to Windows 7 Media Center Program Guide TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 PCmover Professional Make your Joomla & Drupal Sites Mobile with OSMOBI Integrate Twitter and Delicious and Make Life Easier Design Your Web Pages Using the Golden Ratio Worldwide Growth of the Internet How to Find Your Mac Address Use My TextTools to Edit and Organize Text

    Read the article

  • A question about Scala Objects

    - by Randin
    In the example for coding with JSON using Databinder Dispatch, Nathan uses an Object (Http) without a method, shown here: import dispatch._ import Http._ Http("http://www.fox.com/dollhouse/" >>> System.out) How is he doing this?

    Read the article

  • New Release of Oracle EPM (Enterprise Performance Management)

    - by Theresa Hickman
    I'm a huge fan of Hyperion products and consider Hyperion to be one of the best acquisitions Oracle has made in terms of applications. So I am really excited to talk about their latest release, Release 11.1.2 of the Oracle EPM System. This is EPM's largest release in 2 years, and it's jam-packed with new modules and features. In terms of brand new products, there are three: 1. Public Sector Planning and Budgeting meets the needs of public sector agencies, higher education, governments, etc. that have complex budget requirements. It supports position or employee-based budgeting and integrates with MS Office and your ERP ledgers to perform commitment control. 2. Hyperion Financial Close Management is a complete financial close solution that orchestrates the entire close process from subledgers and general ledger to financial reporting and disclosure submissions. And of course, it is integrated with GL systems and consolidation systems. I saw a demo of this and it looked pretty slick. They have this unified close calendar that looks like a regular calendar that gives each person participating in the close process a task list. It comes with a Gantt chart that shows the relationships and dependencies among closing tasks. There are dashboards to allow you to track the close progress and completion of tasks as well as perform trend analysis and see how much time is being spent on different activities in the close process. This gives you visibility that you never had before to understand where the bottlenecks are and where improvements could be made. I think what I liked best about this product was that it provides a central place for all participants to communicate their progress. When I worked as an Accountant, we used ad hoc tools, such as spreadsheets, Word documents, emails, and phone calls during the close process. I like the idea of having a central system to track the overall progress as well as automate the entire financial close process. Who knows, maybe Accountants won't have to revolve their lives around the month end close anymore with a tool like this. Those periodic fire drills can become predictable, well managed processes. 3. Disclosure Management is an out-of-the-box, pre-packaged XBRL solution to meet statutory reporting requirements. This product is really going to help companies improve the timeliness of producing financial reports. Reports can be authored using MS Word and Excel and then XBRL instance documents can be produced with its embedded XBRL tags. It even supports footnotes and disclosures of non-financial information. With a product like this, companies no longer have to outsource their XBRL filing; they can bring it back in house to save costs and time. In terms of other enhancements, they have ERP Integrator that provides integration and drill downs from Hyperion products to source systems, such as Oracle E-Business Suite, PeopleSoft, and SAP. No other vendor offers this level of integration. There's also a new product that links Oracle Essbase directly to Hyperion Financial Management for internal financial reporting, and new integrations between Hyperion Financial Management and Oracle's GRC products. They also improved the usability of Oracle Hyperion Planning. They made it much easier for end users to use the system via the web or via MS Excel when submitting plans and budgets. It is also integrated with intelligent approval workflows that are data-driven, user-configurable, and scenario-specific to efficiently streamline the budgeting process. 
    Here's the press release from April 7, 2010. Here's the pre-recorded webcast where you can see the demos; just register and watch the hour-long presentation. And finally, here's the newsletter.

    Read the article

  • Secret Agent Man

    - by Bil Simser
    Just a quick one this morning as we all get started in the week. Something that comes into play (sometimes in a big way) is the user agent string your browser gives off. So for example using the User-Agent field in the request header, you can determine what browser the user is running and act accordingly.Internet Explorer 9 modified the UA string slightly so just in case you're looking for it here are the user agent strings for IE9 (in various modes):Internet Explorer 9 Mode: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)Internet Explorer 8 Mode: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)Internet Explorer 7 Mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)Internet Explorer 9 (Compatibility Mode): Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)A couple of things to note here:This was from a 64-bit Windows 7 client so that might account for the WOW64 in the agent string (I don't have a 32-bit client to test from)Various applications and platforms add to the UA string just like they do in previous IE releases. So for example you can see I have various .NET versions installed as well as Zune. You can take advantage of this by querying the UA string for compatibilities and present options accordingly to the end user.As applications will continue to add and modify this string you'll want to query the string for parts not the entire string. For example if you want to detect if you're coming from IE running  on a Windows Phone 7 just look for "iemobile" in the user agent stringHappy hacking!
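    As an illustration of the "query the string for parts, not the entire string" advice, a small hedged ASP.NET-side sketch (helper names are illustrative):

        // Inside an ASP.NET page or handler (System.Web), where Request is the current HttpRequest
        protected bool IsWindowsPhone()
        {
            string ua = (Request.UserAgent ?? string.Empty).ToLowerInvariant();
            return ua.Contains("iemobile");        // per the note above
        }

        protected bool IsIe9Engine()
        {
            string ua = (Request.UserAgent ?? string.Empty).ToLowerInvariant();
            // Trident/5.0 identifies the IE9 engine regardless of the document mode chosen
            return ua.Contains("trident/5.0");
        }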

    Read the article

  • .NET: Can I use DataContractJsonSerializer to serialize to a JSON associative array?

    - by Cheeso
    When using DataContractJsonSerializer to serialize a dictionary, like so: [CollectionDataContract] public class Clazz : Dictionary<String,String> {} .... var c1 = new Clazz(); c1["Red"] = "Rosso"; c1["Blue"] = "Blu"; c1["Green"] = "Verde"; Serializing c1 with this code: var dcjs = new DataContractJsonSerializer(c1.GetType()); var json = new Func<String>(() => { using (var ms = new System.IO.MemoryStream()) { dcjs.WriteObject(ms, c1); return Encoding.ASCII.GetString(ms.ToArray()); } })(); ...produces this JSON: [{"Key":"Red","Value":"Rosso"}, {"Key":"Blue","Value":"Blu"}, {"Key":"Green","Value":"Verde"}] But, this isn't a Javascript associative array. If I do the corresponding thing in javascript: produce a dictionary and then serialize it, like so: var a = {}; a["Red"] = "Rosso"; a["Blue"] = "Blu"; a["Green"] = "Verde"; // use utility class from http://www.JSON.org/json2.js var json = JSON.stringify(a); The result is: {"Red":"Rosso","Blue":"Blu","Green":"Verde"} How can I get DCJS to produce or consume a serialized string for a dictionary, that is compatible with JSON2.js ? I know about JavaScriptSerializer from ASP.NET. Not sure if it's very WCF friendly. Does it respect DataMember, DataContract attributes?
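    For what it's worth, a sketch of the JavaScriptSerializer route mentioned at the end, which does emit the associative-array shape for a dictionary (hedged: it ignores DataContract/DataMember and honors ScriptIgnore instead, so those attributes are not respected):

        using System.Collections.Generic;
        using System.Web.Script.Serialization;   // System.Web.Extensions.dll

        public static class ColorJson
        {
            public static string Serialize()
            {
                var colors = new Dictionary<string, string>
                {
                    { "Red", "Rosso" }, { "Blue", "Blu" }, { "Green", "Verde" }
                };

                // Produces {"Red":"Rosso","Blue":"Blu","Green":"Verde"}, the same shape JSON2.js emits
                return new JavaScriptSerializer().Serialize(colors);
            }
        }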

    Read the article
