Search Results

Search found 1666 results on 67 pages for 'practical joke'.

Page 56/67 | < Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >

  • I didn't say anything treasonous, quit putting words in my mouth

    - by You guys Lie
    Where at all did I say ANYTHING about organizing some kind of anti-government activity? Nowhere. I do not give a flying fuck about America in case you hadn't realized, I'm talking about what I'm going to do when Jesus Christ returns. Besides, why would I make it easy for them to throw me in a black site prison? Use common sense, sheeple. In the end I guess it doesn't matter anyways, since I do not recognize the government as my ultimate authority. Police/Army/Whatever-- funny outfits and a shiny badge don't make you better than me. Allah will do away with your kind. However, as long as they're with me (as they semi-currently are), we have no problems. I fear the government will have other plans to control the population when things start to further decline, and that is when they will run into problems with me and mine, and probably a large percentage of the public. You'd have to be a fucking fool to think we'd fall into anarchy immediately, given the vast resources and blind loyalty of this country. Saying there are practical limits to free speech for anything I have said in this thread is not only ignorant, it is unpatriotic. I should be being celebrated as the modern day Paul Revere for warning you people about our impending doom. You would do well to study the foundations of this country. Nothing I have said is treasonous at all. Besides, the DoD knows I'm harmless until shit pops off, they have bigger fish to fry right now, go forward. Keep in mind, I don't personally have to do anything to overthrow the government. I just said, I would not advocate for any paramilitary organizations at this time, only if/after we dissolve into (more) tyranny. I am not a terrorist, like some soldiers in Iraq. The machine is doing a great job of ruining itself, while I get to comfortably laugh at it on the nightly news knowing that I'm ready for it to shut down.

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
                ([ContainerId] ASC, [BinId] ASC, [Sequence] ASC))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    - Drop the primary key while I am doing the inserting and recreate it later?
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
    - Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
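
    A minimal sketch of the knobs available on SqlBulkCopy itself, before touching the index (the connection string and DataTable source are hypothetical placeholders; TableLock plus a larger batch size are the usual first things to try):

        using System.Data;
        using System.Data.SqlClient;

        static class BulkLoader
        {
            // Sketch: bulk-load the pre-sorted rows in fewer, larger batches,
            // holding a table lock for the duration of the load.
            public static void Load(DataTable rows, string connectionString)
            {
                using (SqlBulkCopy bulk = new SqlBulkCopy(connectionString,
                    SqlBulkCopyOptions.TableLock))
                {
                    bulk.DestinationTableName = "BulkData";
                    bulk.BatchSize = 10000;   // vs. ~300-row chunks
                    bulk.BulkCopyTimeout = 0; // no timeout on long loads
                    bulk.WriteToServer(rows);
                }
            }
        }

    On the question itself: loading into a heap (PK dropped) and building the clustered index once at the end often beats maintaining the index row by row, though this is workload-dependent and worth measuring.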

    Read the article

  • Possible to Inspect Innards of Core C# Functionality

    - by Nick Babcock
    I was struck today with the inclination to compare the innards of Buffer.BlockCopy and Array.CopyTo. I am curious to see if Array.CopyTo calls Buffer.BlockCopy behind the scenes. There is no practical purpose behind this; I just want to further my understanding of the C# language and how it is implemented. Don't jump the gun and accuse me of micro-optimization, but you can accuse me of being curious! When I ran ILDasm on mscorlib.dll, I received this for Array.CopyTo:

        .method public hidebysig newslot virtual final
                instance void CopyTo(class System.Array 'array', int32 index) cil managed
        {
          // Code size       0 (0x0)
        } // end of method Array::CopyTo

    and this for Buffer.BlockCopy:

        .method public hidebysig static void BlockCopy(class System.Array src, int32 srcOffset,
                class System.Array dst, int32 dstOffset, int32 count) cil managed internalcall
        {
          .custom instance void System.Security.SecuritySafeCriticalAttribute::.ctor() = ( 01 00 00 00 )
        } // end of method Buffer::BlockCopy

    Which, frankly, baffles me. I've never run ILDasm on a dll/exe I didn't create. Does this mean that I won't be able to see how these functions are implemented? Searching around only revealed a Stack Overflow question, in which Marc Gravell said "[Buffer.BlockCopy] is basically a wrapper over a raw mem-copy". While insightful, it doesn't answer my question of whether Array.CopyTo calls Buffer.BlockCopy. I'm specifically interested in whether I'm able to see how these two functions are implemented and, if I had future questions about the internals of C#, whether it is possible for me to investigate them. Or am I out of luck?
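
    As a quick sanity check on what there is to see, reflection can report whether a method even has an IL body to disassemble; a small sketch:

        using System;
        using System.Reflection;

        class Program
        {
            static void Main()
            {
                MethodInfo copyTo = typeof(Array).GetMethod(
                    "CopyTo", new[] { typeof(Array), typeof(int) });
                MethodInfo blockCopy = typeof(Buffer).GetMethod("BlockCopy");

                // MethodImplAttributes.InternalCall means "implemented inside
                // the CLR itself, no IL body to disassemble"; methods with
                // plain IL bodies can be decompiled.
                Console.WriteLine("{0}: {1}", copyTo.Name,
                    copyTo.GetMethodImplementationFlags());
                Console.WriteLine("{0}: {1}", blockCopy.Name,
                    blockCopy.GetMethodImplementationFlags());
            }
        }

    For methods that do carry IL, ILDasm or a decompiler shows the implementation; InternalCall methods live in the runtime's native code, where the SSCLI/Rotor sources are the closest public view.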

    Read the article

  • Architecture Guidance Needed?

    - by vijay
    We are about to automate a number of processes for our reporting team. (The reports are daily reports, weekly reports, monthly reports, etc.) Mostly the process is pulling some data from Oracle and then filling in particular Excel template files. Each report, and so each template, is different from the others. Except for the Excel file manipulation, there is hardly any business logic behind these. The client wants an integrated tool, with all the automated processes placed as menus/submenus. Right now there are roughly 30 processes waiting to be automated, and we are expecting more new reports in the next quarter.

    I am nowhere near having any practical experience when it comes to architecting. I have already been maintaining two or three systems (they are more than 4 yrs old) for this prestigious client, and the possibility that the above-mentioned tool will be maintained for another 3 yrs is very likely. From my past experience I've been through the pain of implementing change requests against a rigid and undocumented code base, resulting in the breakdown of the system and then, eventually, myself. So my main and topmost concern is maintainability. When I was searching around this, I came across this link: Smart Clients Using CAB and SCSF. Is the above link appropriate for my requirement? Also, should I place each automated process in a separate form under a single project, or place them in separate projects under a single solution? Please correct me if I have missed any other important information. Thanks.
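
    One hedged sketch of a maintainable shape (all names hypothetical): treat each automated report as a plug-in behind a common interface, so the shell builds its menus by discovery instead of hand-wiring 30+ forms.

        using System.Collections.Generic;

        // Hypothetical plug-in contract: each automated report implements
        // this, and the shell application builds its menus/submenus by
        // discovering implementations (e.g. via reflection over one assembly).
        public interface IReportProcess
        {
            string MenuPath { get; }   // e.g. "Reports/Daily/Sales"
            void Run(IDictionary<string, object> parameters);
        }

        public sealed class DailySalesReport : IReportProcess
        {
            public string MenuPath { get { return "Reports/Daily/Sales"; } }

            public void Run(IDictionary<string, object> parameters)
            {
                // 1. pull data from Oracle
                // 2. fill this report's Excel template
            }
        }

    With this shape, adding next quarter's reports means adding classes, not reworking the shell, which speaks directly to the maintainability concern.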

    Read the article

  • MVVM: Thin ViewModels and Rich Models

    - by Dan Bryant
    I'm continuing to struggle with the MVVM pattern and, in attempting to create a practical design for a small/medium project, have run into a number of challenges. One of these challenges is figuring out how to get the benefits of decoupling with this pattern without creating a lot of repetitive, hard-to-maintain code. My current strategy has been to create 'rich' Model classes. They are fully aware that they will be consumed by an MVVM pattern and implement INotifyPropertyChanged, allow their collections to be observed and remain cognizant that they may always be under observation. My ViewModel classes tend to be thin, only exposing properties when data actually needs to be transformed, with the bulk of their code being RelayCommand handlers. Views happily bind to either ViewModels or Models directly, depending on whether any data transformation is required. I use AOP (via Postsharp) to ease the pain of INotifyPropertyChanged, making it easy to make all of my Model classes 'rich' in this way. Are there significant disadvantages to using this approach? Can I assume that the ViewModel and View are so tightly coupled that if I need new data transformation for the View, I can simply add it to the ViewModel as needed?
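
    A minimal hand-written sketch of the "rich model" half of this arrangement (the boilerplate Postsharp would otherwise weave in):

        using System.ComponentModel;

        // A model that knows it may be observed: INotifyPropertyChanged lives
        // on the model itself, so Views can bind to it directly.
        public class Customer : INotifyPropertyChanged
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    OnPropertyChanged("Name");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }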

    Read the article

  • Disabling normal link behaviour on click with an if statement in jQuery?

    - by Qwibble
    I've been banging my head against this all day, trying anything and everything I can think of to no gain. I'm no jQuery guru, so I am hoping someone here can help me out. What I'm trying to do in pseudo code (concept) seems practical and easy to implement, but obviously not so =D

    What I have is a navigation sidebar, with sub-menus within some li's, and a link at the head of each li. When a top-level link is clicked, I want the jQuery to check if the link has a sibling ul. If it does, the link doesn't act like a link, but slides down the sibling sub-nav. If there isn't one, the link should act normally. Here's where I'm at currently; what am I doing wrong?

        $('ul.navigation li a').not('ul.navigation li ul li a').click(function (){
            if($(this).parent().contains('ul')){
                $('ul.navigation li ul').slideUp();
                $(this).siblings('ul').slideDown();
                return false;
            }
            else{
                alert('Link Clicked') // Maybe do some more stuff in here...
            }
        });

    Any ideas?

    Read the article

  • Is it possible to implement an infinite IEnumerable without using yield with only C# code?

    - by sinelaw
    Edit: Apparently off topic... moving to Programmers.StackExchange.com.

    This isn't a practical problem; it's more of a riddle.

    Problem: I'm curious to know if there's a way to implement something equivalent to the following, but without using yield:

        IEnumerable<T> Infinite<T>()
        {
            while (true) { yield return default(T); }
        }

    Rules:

    - You can't use the yield keyword.
    - Use only C# itself directly - no IL code, no constructing dynamic assemblies, etc.
    - You can only use the basic .NET lib (only mscorlib.dll, System.Core.dll? not sure what else to include). However, if you find a solution with some of the other .NET assemblies (WPF?!), I'm also interested.
    - Don't implement IEnumerable or IEnumerator.

    Notes: The closest I've come yet:

        IEnumerable<int> infinite = null;
        infinite = new int[1].SelectMany(x => new int[1].Concat(infinite));

    This is "correct" but hits a StackOverflowException after 14399 iterations through the enumerable (not quite infinite). I'm thinking there might be no way to do this due to the CLR's lack of tail recursion optimization. A proof would be nice :)
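
    For what it's worth, a hedged near-miss that stays within the rules: chaining two int.MaxValue-sized sequences through SelectMany gives roughly 4.6×10^18 elements with no yield in user code, no custom IEnumerable/IEnumerator, and no stack growth, though it is still not mathematically infinite:

        using System.Collections.Generic;
        using System.Linq;

        static class PracticallyInfinite
        {
            // Not truly infinite, but int.MaxValue * int.MaxValue elements
            // (~4.6e18); consuming one per nanosecond would take ~146 years.
            public static IEnumerable<T> Infinite<T>()
            {
                return Enumerable.Range(0, int.MaxValue)
                                 .SelectMany(_ => Enumerable.Repeat(default(T), int.MaxValue));
            }
        }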

    Read the article

  • Does this perfectly mimic a function template specialization?

    - by zeroes00
    Since the function template in the following code is a member of a class template, it can't be specialized without specializing the enclosing class. But if the compiler's full optimizations are on (assume Visual Studio 2010), will the if-else statement in the following code get optimized out? And if it does, wouldn't it mean that for all practical purposes this IS a function template specialization, without any performance cost?

        #include <iostream>
        using namespace std;

        template<typename T>
        struct Holder
        {
            T data;
            template<int Number> void saveReciprocalOf();
        };

        template<typename T>
        template<int Number>
        void Holder<T>::saveReciprocalOf()
        {
            // Will this if-else statement get completely optimized out?
            if (Number == 0)
                data = (T)0;
            else
                data = (T)1 / Number;
        }

        //-----------------------------------
        int main()
        {
            Holder<float> holder;
            holder.saveReciprocalOf<2>();
            cout << holder.data << endl;
        }

    Read the article

  • OO Design / Patterns - Fat Model Vs Transaction Script?

    - by ben
    Ok, 'Fat' Model and Transaction Script both solve design problems associated with where to keep business logic. I've done some research, and popular thought says having all business logic encapsulated within the model is the way to go (mainly since Transaction Script can become really complex and often results in code duplication). However, how does this work if I want to use the TDG of a second Model in my business logic? Surely Transaction Script presents a neater, less coupled solution than using one Model inside the business logic of another?

    A practical example... I have two classes: User & Alert. When pushing User instances to the database (e.g., creating new user accounts), there is a business rule that requires inserting some default Alert records too (e.g., a default 'welcome to the system' message, etc.). I see two options here:

    1) Add this rule as a User method, and in the process create a dependency between User and Alert (or, at least, Alert's Table Data Gateway).
    2) Use a Transaction Script, which avoids the dependency between models. (This also means the business logic is kept in a 'neutral' class and is easily accessible by Alert. That probably isn't too important here, though.)

    User takes responsibility for its own validation etc., but because we're talking about a business rule involving two Models, Transaction Script seems like a better choice to me. Anyone spot flaws with this approach?
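
    A sketch of what option 2 might look like here (all class and gateway names are invented for illustration):

        public class User
        {
            public string Name;
            public void Validate() { /* field checks, etc. */ }
        }

        // Hypothetical Table Data Gateways, stubbed for illustration.
        public class UserGateway  { public int Insert(User u) { /* INSERT ... */ return 42; } }
        public class AlertGateway { public void Insert(int userId, string msg) { /* INSERT ... */ } }

        // The Transaction Script: the cross-model rule lives in a neutral
        // class, so User never references Alert's gateway.
        public class RegisterUserScript
        {
            private readonly UserGateway _users;
            private readonly AlertGateway _alerts;

            public RegisterUserScript(UserGateway users, AlertGateway alerts)
            {
                _users = users;
                _alerts = alerts;
            }

            public void Execute(User newUser)
            {
                newUser.Validate();                 // User still owns its validation
                int userId = _users.Insert(newUser);
                _alerts.Insert(userId, "Welcome to the system");
            }
        }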

    Read the article

  • Data Mappers, Models and Images

    - by James
    Hi, I've seen and read plenty of blog posts and forum topics talking about and giving examples of Data Mapper / Model implementations in PHP, but I've not seen any that also deal with saving files/images. I'm currently working on a Zend Framework based project, and I'm doing some image manipulation in the model (which is being passed a file path), then leaving it to the mapper to save that file to the appropriate location - is this common practice? But then, how do you deal with creating, say, 3 different size images from the one passed in?

    At the moment I have a "setImage($path_to_tmp_name)" which checks the image type, resizes, and then saves back to the original filename. A call to "getImagePath()" then returns the current file path, which the data mapper can use and then change with a call to "setImagePath($path)" once it has saved the file to the appropriate location, say "/content/my_images". Does this sound practical to you?

    Also, how would you deal with getting the URL to that image? Do you see that as being something the model should provide? It seems to me like the model shouldn't worry about where the images are being stored or ultimately how they're accessed through a browser, so I'm inclined to put that in the ini file and just pass the URL prefix to the view through the controller. Does that sound reasonable? I'm using GD for image manipulation - not that that's of any relevance.

    UPDATE: I've been wondering whether the image resizing should be done in the model at all. The model could require that it's provided a "main" image and a "thumb" image, both of certain dimensions. I've thought about creating a "getImageSpecs()" function in the model that would return something that defines the required sizes; then a separate image manipulation class could carry out the resizing (perhaps in the controller?) and just pass the final paths into the model using something like "setImagePaths($images)". Any thoughts much appreciated :) James.

    Read the article

  • Algorithm for finding the best routes for food distribution in game

    - by Tautrimas
    Hello, I'm designing a city building game and have run into a problem. Imagine Sierra's Caesar III game mechanics: you have many city districts with one market each. There are several granaries over the distance, connected with a directed weighted graph. The difference: people (here, cars) are units that form traffic jams (hence the graph weights). Note: in the Caesar game series, people harvested food and stockpiled it in several big granaries, whereas many markets (small shops) took food from the granaries and delivered it to the citizens.

    The task: tell each district where it should be getting its food from, while taking the least time and minimizing congestion on the city's roads.

    [Map example and sample diagram omitted in this listing.] Suppose that the yellow districts need 7, 7, and 4 apples respectively, and the bluish granaries have 7 and 11 apples respectively. Suppose edge weights to be proportional to their length. Then the solution should be something like the gray numbers indicated on the edges: e.g., the first district gets 4 apples from the 1st granary and 3 apples from the 2nd, while the last district gets 4 apples from only the 2nd granary. Here, the vertical roads are occupied to the max first, and then the remaining workers are sent along the diagonal paths.

    Question: What practical and very fast algorithm should I use? I was looking at some papers (Congestion Games: Optimization in Competition, etc.) describing congestion games, but could not get the big picture. Any help is very appreciated!

    P.S. I can post very few links and no images because of new-user restrictions.
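
    This maps onto minimum-cost maximum flow: add a super-source with an edge to each granary (capacity = its stock, cost 0), an edge from each district to a super-sink (capacity = its demand, cost 0), and keep the roads as edges whose cost is their length (or current congestion). A compact successive-shortest-paths sketch, with Bellman-Ford inside (fine for city-sized graphs, not tuned for huge ones; since static costs only approximate congestion, re-running it as weights change is an assumption of this approach):

        using System;
        using System.Collections.Generic;

        // Minimal min-cost max-flow via successive shortest paths.
        class MinCostFlow
        {
            class Edge { public int To, Cap, Cost; public Edge Rev; }

            private readonly List<Edge>[] _adj;

            public MinCostFlow(int nodeCount)
            {
                _adj = new List<Edge>[nodeCount];
                for (int i = 0; i < nodeCount; i++) _adj[i] = new List<Edge>();
            }

            public void AddEdge(int from, int to, int capacity, int cost)
            {
                var fwd = new Edge { To = to, Cap = capacity, Cost = cost };
                var rev = new Edge { To = from, Cap = 0, Cost = -cost };
                fwd.Rev = rev; rev.Rev = fwd;
                _adj[from].Add(fwd);
                _adj[to].Add(rev);
            }

            // Pushes as much flow as possible from s to t, cheapest paths first.
            public int Solve(int s, int t, out int totalCost)
            {
                int flow = 0; totalCost = 0;
                int n = _adj.Length;
                while (true)
                {
                    var dist = new int[n];
                    var prev = new Edge[n];
                    for (int i = 0; i < n; i++) dist[i] = int.MaxValue;
                    dist[s] = 0;
                    for (int pass = 0; pass < n - 1; pass++)   // Bellman-Ford
                        for (int u = 0; u < n; u++)
                            if (dist[u] != int.MaxValue)
                                foreach (var e in _adj[u])
                                    if (e.Cap > 0 && dist[u] + e.Cost < dist[e.To])
                                    { dist[e.To] = dist[u] + e.Cost; prev[e.To] = e; }

                    if (prev[t] == null) return flow;          // no augmenting path left

                    int push = int.MaxValue;                   // bottleneck capacity
                    for (var e = prev[t]; e != null; e = prev[e.Rev.To])
                        push = Math.Min(push, e.Cap);
                    for (var e = prev[t]; e != null; e = prev[e.Rev.To])
                    { e.Cap -= push; e.Rev.Cap += push; totalCost += push * e.Cost; }
                    flow += push;
                }
            }
        }

    For the sample diagram: AddEdge(source, granary1, 7, 0); AddEdge(source, granary2, 11, 0); one AddEdge per road with its length as the cost (and its throughput as the capacity); AddEdge(district, sink, demand, 0); then Solve(source, sink) and read each district's suppliers off the residual capacities.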

    Read the article

  • Do the new NoPIA and Type Equivalence features in C#/.NET 4.0 mean Microsoft.mshtml.dll is no longer needed?

    - by jpierson
    I'm maintaining a WPF based application which contains a WinForms based WebBrowser control that is based on the IE web browser control. When we deploy, we have had to also supply Microsoft.mshtml.dll and do some custom configuration for our ClickOnce publishing process in order to get things to work. I'm curious whether, with the new NoPIA and Type Equivalence features and dynamic type capabilities in C# 4.0, we can expect that if we upgrade we can remove the dependency on the Microsoft.mshtml.dll assembly. If so, this will not only reduce the size of our deployment quite a bit but will also simplify our publishing process. It is my understanding that we should be able to embed the types that Visual Studio normally generates into extra assemblies for COM types, such as the MapPoint Control. I don't know if this also applies to Microsoft.mshtml.dll, or even how it is done in the simplest of cases. If somebody could provide an explanation of the practical impact of these new features on a project that relies on COM interop, and especially on the Microsoft.mshtml.dll assembly, it would be of great help to me.
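
    In principle, yes for compile-time references: with C# 4.0 / VS2010 you can mark the Microsoft.mshtml reference with Embed Interop Types, and the compiler copies just the COM interface types you actually use into your own assembly (NoPIA), so the PIA no longer has to ship. A hedged sketch of the project-file switch (whether mshtml's very large type surface embeds cleanly for your particular usage is something to verify):

        <!-- In the .csproj: equivalent to setting "Embed Interop Types" = True
             on the reference in Solution Explorer (VS2010, C# 4.0). -->
        <Reference Include="Microsoft.mshtml">
          <EmbedInteropTypes>True</EmbedInteropTypes>
        </Reference>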

    Read the article

  • Is my heuristic algorithm correct? (Sudoku solver)

    - by Aposperite
    First off (yes, this IS homework), this is primarily a theoretical question rather than a practical one: I am simply asking for confirmation that I am thinking correctly, or any hints if I am not.

    I have been asked to write a simple Sudoku solver (in Prolog, but that is not so important right now) with the only limitation being that it must utilize a heuristic function using the Best-First algorithm. The only heuristic function I have been able to come up with is explained below:

    1. Select an empty cell.
    1a. If there are no empty cells and there is a solution, return the solution. Else return No.
    2. Find all possible values it can hold. (It can't take values currently assigned to cells on the same line/column/box.)
    3. Assign each of those values a heuristic number, starting from 1.
    4. Pick the value whose heuristic number is the lowest and which you haven't checked yet.
    4a. If there are no more values, return No.
    5. If a solution is not found: go to 1. Else return the solution.

    (I am sorry for errors in this "pseudo code." If you want any clarification, let me know.)

    So am I doing this right, or is there another way around this and mine is false? Thanks in advance.

    Read the article

  • Extension Methods and Application Code

    - by Mystagogue
    I have seen plenty of online guidelines for authoring extension methods, usually along these lines:

    1) Avoid authoring extension methods when practical - prefer other approaches first (e.g. regular static methods).
    2) Don't author extension methods to extend code you own or currently develop. Instead, author them to extend 3rd party or BCL code.

    But I have the impression that a couple more guidelines are either implied or advisable. What does the community think of these two additional guidelines?

    A) Prefer to author extension methods that contain generic functionality rather than application-specific logic. (This seems to follow from guideline #2 above.)
    B) An extension method should be sizeable enough to justify itself (preferably at least 5 lines of code in length).

    Item (B) is intended to discourage a developer from writing dozens of extension methods (totalling X lines of code) to refactor or replace what was originally already about X lines of inline code. Perhaps item (B) is badly qualified, or even misinformed about how a one-line extension method is actually powerful and justified. I'm curious to know. But if item (B) is somehow dismissed by the community, I must admit I'm still particularly interested in feedback on guideline (A).
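
    To make guideline (A) concrete, a small hedged illustration (all names invented):

        // Generic, fits guideline (A): extends a BCL type with reusable behavior.
        public static class StringExtensions
        {
            public static bool IsBlank(this string value)
            {
                return value == null || value.Trim().Length == 0;
            }
        }

        // App-specific logic dressed up as an extension: guideline (A) would
        // rather see this as a regular method on a pricing/order service.
        public static class PricingExtensions
        {
            public static decimal ApplyHolidayDiscount(this decimal price)
            {
                return price * 0.9m;  // a business rule hiding in an extension
            }
        }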

    Read the article

  • How to scan an array for certain information

    - by Andrew Martin
    I've been doing an MSc Software Development conversion course, the main language of which is Java, since the end of September. We have our first assessed practical coming up, and I was hoping for some guidance. We have to create an array that will store 100 integers (all of which are between 1 and 10), generated by a random number generator, and then print out ten numbers of this array per line. For the second part, we need to scan these integers, count up how often each number appears, and store the results in a second array. I've done the first bit okay, but I'm confused about how to do the second. I have been looking through the Scanner class to see if it has any methods I could use, but I don't see any. Could anyone point me in the right direction - not the answer, but perhaps which library it comes from? Code so far:

        import java.util.Random;

        public class Practical4_Assessed
        {
            public static void main(String[] args)
            {
                Random numberGenerator = new Random();
                int[] arrayOfGenerator = new int[100];
                for (int countOfGenerator = 0; countOfGenerator < 100; countOfGenerator++)
                    arrayOfGenerator[countOfGenerator] = numberGenerator.nextInt(10);

                int countOfNumbersOnLine = 0;
                for (int countOfOutput = 0; countOfOutput < 100; countOfOutput++)
                {
                    if (countOfNumbersOnLine == 10)
                    {
                        System.out.println("");
                        countOfNumbersOnLine = 0;
                        countOfOutput--;
                    }
                    else
                    {
                        System.out.print(arrayOfGenerator[countOfOutput] + " ");
                        countOfNumbersOnLine++;
                    }
                }
            }
        }

    Thanks, Andrew

    Read the article

  • Converting FoxPro Date type to SQL Server 2005 DateTime using SSIS

    - by Avrom
    Hi, when using SSIS in SQL Server 2005 to convert a FoxPro database to a SQL Server database, if the given FoxPro database has a date type, SSIS assumes it is an integer type. The only way to convert it to a datetime type is to manually select this type, but that is not practical to do for over 100 tables. Thus, I have been using a workaround in which I use DTS on SQL Server 2000, which converts it to a smalldatetime, then make a backup, then do a restore into SQL Server 2005. This workaround is starting to be a little annoying. So, my question is: is there any way to set up SSIS so that whenever it encounters a date type it automatically assumes it should be converted to a datetime in SQL Server, and applies that rule across the board?

    UPDATE: To be specific, if I use the import/export wizard in SSIS, I get the following error: "Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider." This is followed by a list of a given table's date columns. If I manually set each one to a datetime, it imports fine. But I do not wish to do this for a hundred tables.

    Read the article

  • Spring - Transaction Readonly

    - by AAK
    Hello gurus! I just wanted your expert opinions on declarative transaction management for Spring. Here is my setup:

    A. The DAO layer is plain old JDBC using JdbcTemplate (no Hibernate etc.)
    B. The service layer is POJOs with declarative transactions as follows: save*, readonly=false, rollback for Throwable

    Things work fine with the above setup. However, when I add get*, readonly=true, I see errors in my log file saying "Database connection cannot be marked as readonly". This happens for all get* methods in the service layer. Now my questions:

    A. Do I have to mark get* as readonly? All my get* methods are pure DB read operations. I do not wish to run them in any transaction context. How serious is the above error?
    B. When I remove the get* configuration, I do not see the errors; moreover, all my simple get* operations are performed without transactions. Is this the way to go?
    C. Why would anyone want to have transactional methods where readonly = true? Is there any practical significance to this configuration?

    Thank you! As always, your responses are much appreciated!

    Read the article

  • Codeplex/Sourceforge for internal use

    - by Josh
    I'm looking for a free/open source collaborative project manager that can be deployed internally in my workplace and would act similar to Codeplex or Sourceforge. Does anyone know of something like this, and if so, do you have experience with it?

    Requirements:

    - Open source or free
    - Locally deployable
    - Has the same types of features found in Sourceforge / Codeplex
    - Issue/feature tracking
    - Community interaction (i.e. voting, roles, etc.)
    - SCM integration (optional)
    - .NET/Windows friendly (optional)

    Every business ends up having internal utilities and domain-specific apps that developers create to make life easier. Given the input of the internal developer community, they have the potential to become much better (can you say GMail...), and I would simply like to foster such an environment internally by providing an easy place for that interaction to take place.

    UPDATE: So I like what I am seeing in both Trac and GForge, but both are heavily geared towards UNIX/Subversion environments. I should have specified this, but we are an MS shop from top to bottom. How practical do you think it is going to be to try and use these in an MS .NET environment? Would that be like trying to shove a square peg through a round hole?

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules:

    - a password, "foobar12" (we are not discussing the strength of the password)
    - a language, Java 1.6 for this discussion
    - a database: PostgreSQL, MySQL, SQL Server, or Oracle

    Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column.

    Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple: MessageDigest in Java provides MD5 and SHA1 (through SHA512 in modern implementations, certainly 1.6), and most of the RDBMSs listed provide MD5 functions usable on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, where the cost of updating the table and the code could be too high.

    But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password, then rehashing to match using the hash algorithm... ??? So... any takers on helping. :) Am I close?
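
    For what it's worth, the usual single-column scheme stores the salt in the clear, concatenated with the hash: verification re-derives the hash from the stored salt plus the candidate password, so nothing ever has to be "removed" or reversed. A sketch (in C# for brevity; the Java 1.6 equivalent is SecureRandom plus SecretKeyFactory's PBKDF2WithHmacSHA1; sizes and iteration count are illustrative):

        using System;
        using System.Security.Cryptography;

        static class PasswordHasher
        {
            const int SaltSize = 16, HashSize = 20, Iterations = 10000;

            // One column: Base64( salt || PBKDF2(password, salt) )
            public static string Hash(string password)
            {
                var salt = new byte[SaltSize];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(salt);
                byte[] hash = new Rfc2898DeriveBytes(password, salt, Iterations)
                                  .GetBytes(HashSize);
                var combined = new byte[SaltSize + HashSize];
                Buffer.BlockCopy(salt, 0, combined, 0, SaltSize);
                Buffer.BlockCopy(hash, 0, combined, SaltSize, HashSize);
                return Convert.ToBase64String(combined);
            }

            public static bool Verify(string password, string stored)
            {
                byte[] combined = Convert.FromBase64String(stored);
                var salt = new byte[SaltSize];
                Buffer.BlockCopy(combined, 0, salt, 0, SaltSize); // salt is public
                byte[] expected = new Rfc2898DeriveBytes(password, salt, Iterations)
                                      .GetBytes(HashSize);
                for (int i = 0; i < HashSize; i++)
                    if (combined[SaltSize + i] != expected[i]) return false;
                return true;
            }
        }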

    Read the article

  • Link checker: how to avoid false positives

    - by Burnzy
    I'm working on a link checker / broken-link finder and I am getting many false positives. After double checking, I noticed that many error codes were returning WebExceptions even though the URLs were actually downloadable, and in some other cases the status code is 404 yet I can access the page from the browser.

    So here is the code. It's pretty ugly, and I'd like to have something more, I'd say, practical. All the status codes in that big if are used to filter out the ones I don't want to add to broken links, because they are valid links (I tested them all). What I need to fix is the structure (if possible) and how to not get false 404s. Thank you!

        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "Head";
            request.MaximumResponseHeadersLength = 32; // FOR IE SLOW SPEED
            request.AllowAutoRedirect = true;
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                request.Abort();
            }
            /*
            WebClient wc = new WebClient();
            wc.DownloadString( uri );
            */
            _validlinks.Add(strUri);
        }
        catch (WebException wex)
        {
            if (!wex.Message.Contains("The remote name could not be resolved:")
                && wex.Status != WebExceptionStatus.ServerProtocolViolation)
            {
                if (wex.Status != WebExceptionStatus.Timeout)
                {
                    HttpStatusCode code = ((HttpWebResponse)wex.Response).StatusCode;
                    if (code != HttpStatusCode.OK && code != HttpStatusCode.BadRequest
                        && code != HttpStatusCode.Accepted && code != HttpStatusCode.InternalServerError
                        && code != HttpStatusCode.Forbidden && code != HttpStatusCode.Redirect
                        && code != HttpStatusCode.Found)
                    {
                        _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
                    }
                    else
                        _validlinks.Add(strUri);
                }
                else
                    _brokenlinks.Add(new Href(new Uri(strUri, UriKind.RelativeOrAbsolute), UrlType.External));
            }
            else
                _validlinks.Add(strUri);
        }
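
    One common cause of false 404s worth ruling out: the checker only ever issues HEAD, and a fair number of servers misreport or forbid HEAD while serving GET normally. A hedged restructuring sketch that falls back to GET before declaring a link broken (only standard HttpWebRequest calls; the status-code whitelist above could be layered back in):

        using System;
        using System.Net;

        static class LinkProbe
        {
            // Returns true if the URI answers HEAD or, failing that, GET.
            public static bool IsReachable(Uri uri)
            {
                return Probe(uri, "HEAD") || Probe(uri, "GET");
            }

            static bool Probe(Uri uri, string method)
            {
                try
                {
                    var request = (HttpWebRequest)WebRequest.Create(uri);
                    request.Method = method;
                    request.AllowAutoRedirect = true;
                    request.Timeout = 10000; // ms; tune to taste
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        return (int)response.StatusCode < 400;
                    }
                }
                catch (WebException wex)
                {
                    // wex.Response is null for DNS failures and timeouts; guard the cast.
                    var response = wex.Response as HttpWebResponse;
                    if (response != null)
                        Console.WriteLine("{0} {1} -> {2}", method, uri, (int)response.StatusCode);
                    return false; // a HEAD failure falls through to the GET retry above
                }
            }
        }

    With this shape, _brokenlinks only gets entries for URIs where both probes fail, which should eliminate the HEAD-only false positives.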

    Read the article

  • What libraries or platforms should I use to build web apps that provide real-time, asynchronous data synchronization?

    - by Daniel Sterling
    This is less a question with a simple, practical answer and more a question to foster discussion on the real-time data exchange topic. I'll begin with an example: Google Wave is, at its core, a real-time asynchronous data synchronization engine. Wave supports (or plans to support) concurrent (real-time) document collaboration, disconnected (offline) document editing, conflict resolution, document history and playback with attribution, and server federation. A core part of Wave is the Operational Transformation engine: http://www.waveprotocol.org/whitepapers/operational-transform The OT engine manages document state. Changes between clients are merged, and each client has a sane and consistent view of the document at all times; the final document is eventually consistent between all connected clients.

    My question is: is this system abstract or general enough to be used as a library or generic framework upon which to build web apps that synchronize real-time, asynchronous state in each client? Is the Wave protocol directly used by any current web applications (besides Google's client)? Would it make sense to use it directly for generic state synchronization in a web app? What other existing libraries or frameworks would you consider using when building such a web app? How much code in such an app might be domain-specific logic vs. generic state synchronization logic? Or, put another way, how leaky might the state synchronization abstractions be? Comments and discussion welcomed!

    Read the article

  • Is there an equivalent of a trigger for general stored procedure execution on SQL Server?

    - by Arj
    Hi all, hope you can help. Is there a way to detect when a stored proc is being run on SQL Server without altering the SP itself?

    Here's the requirement: we need to track users running reports from our enterprise data warehouse, as the core product we use doesn't allow for this. Both the core product reports and a slew of in-house ones we've added all return their data from individual stored procs. We don't have a practical way of altering the parts of the product webpages where reports are called from, and we also can't change the stored procs for the core product reports. (It would be trivial to add a logging line to the start/end of each of our in-house ones.)

    What I'm trying to find, therefore, is whether there's a way in SQL Server (2005 / 2008) to execute a logging stored proc whenever any other stored procedure runs, without altering those stored procedures themselves. We have general control over the SQL Server instance itself as it's local; we just don't want to change the product stored procs themselves. Does anyone have any ideas? Is there a kind of "stored proc executing trigger"? Is there an event model for SQL Server that we can hook custom .NET code into? (Just to discount it from the start: we want to try and make a change to SQL Server rather than get into capturing the report being run from the product's webpages, etc.)

    Thoughts appreciated. Thanks.

    Read the article

  • Yet another Haskell vs. Scala question

    - by Travis Brown
    I've been using Haskell for several months, and I love it - it's gradually become my tool of choice for everything from one-off file renaming scripts to larger XML processing programs. I'm definitely still a beginner, but I'm starting to feel comfortable with the language and the basics of the theory behind it.

    I'm a lowly graduate student in the humanities, so I'm not under a lot of institutional or administrative pressure to use specific tools for my work. It would be convenient for me in many ways, however, to switch to Scala (or Clojure). Most of the NLP and machine learning libraries that I work with on a daily basis (and that I've written in the past) are Java-based, and the primary project I'm working for uses a Java application server.

    I've been mostly disappointed by my initial interactions with Scala. Many aspects of the syntax (partial application, for example) still feel clunky to me compared to Haskell, and I miss libraries like Parsec and HXT and QuickCheck. I'm familiar with the advantages of the JVM platform, so practical questions like this one don't really help me.

    What I'm looking for is a motivational argument for moving to Scala. What does it do (that Haskell doesn't) that's really cool? What makes it fun or challenging or life-changing? Why should I get excited about writing it?

    Read the article

  • Is there a way to trace through only project source in Delphi?

    - by Justin
    I'm using Delphi 2010 and I'm wondering if there's a way to trace through code which is in the project without tracing through calls to included VCLs. For example - you put in a breakpoint and then use Shift+F7 to trace through line-by-line. Now you run into a call to some lengthy procedure in a VCL - in my case it's often a Measurement Studio or other component that draws the doodads for a bunch of I/O, OPC, or other bits. At any rate, what happens is that the debugger hops out of the active source file, opens the component source, and traces through that line by line. Often this is hundreds or thousands of lines of code I don't care about - I just want to have it execute and return to the next source line in MY project. Obviously you can do this by setting breakpoints around every instance of an external call, but often there are too many to make this practical - I'd be spending an hour setting a hundred breakpoints every time I wanted to step through a section of code. Is there a setting or a tool somewhere that can do this? Allow one to trace through code within the project while silently executing code which is external to the project? Thanks in advance, Stack Overflow!

    Read the article

  • NetLogo: error when putting variable in table, only constants allowed?

    - by Chantal
    Hello, I am currently working on a NetLogo program where I need to use nodes and links for a vehicle routing problem (links are called streets in the program). I have some practical problems with how to put the variable linkspeed into a table along with another node. Constants like 200 etc. are fine. Online I found some examples where variables are used, but I do not know why I keep getting the following error: "Expected a constant." (or why NetLogo expects a constant). Here is the relevant piece of code:

        extensions [table]
        streets-own [linkspeed linktoll]
        nodes-own [netw]

        ;; In another piece of code, linkspeed is assigned successfully to the links.

        to cheapcalc
          ;; start conditions: set costs very high (300000)
          ;; state 3 = unsearched, state 2 = searching, state 1 = searched (for later purposes)
          ask nodes [
            set i 0
            set j count nodes
            set netw table:make
            while [i < j] [
              table:put netw (i) [3000000 3]
              set i (i + 1)]]
          set i 0
          let k 0
          ask node 35  ;; here I use node 35 as an example;
                       ;; node 35 is connected to nodes 34, 36, 20 and 50
          [table:put netw (35) [0 1]  ;; a node needs the cost to travel to itself;
                                      ;; putting constants is OK
           while [i < j]
            [ask my-links
              [ask both-ends
                [if (who != 35)
                  [set color blue
                   ;; set temp ([linkspeed] of street 35 who)
                   ;; my real goal is to put linkspeed here instead of i,
                   ;; but i is easier than linkspeed
                   table:put netw (who) [ i 2 ]]]]
             set i (i + 1)]]
          ;; the next node would follow here; it is just a repetition of the same
        end

    I hope somebody knows what is going on... Kind regards, Chantal

    Read the article
