Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • Recursively adding threads to a Java thread pool

    - by Leith
    I am working on a tutorial for my Java concurrency course. The objective is to use thread pools to compute prime numbers in parallel, based on the Sieve of Eratosthenes. There is an array of n booleans, where n is the largest integer to check; each element represents one integer, true means prime, false means non-prime, and the array starts out all true.

    A thread pool with a fixed number of threads is used (we are supposed to experiment with the pool size and observe the performance). A task is given an integer p to process: it finds the first true element in the array that is not a multiple of p, submits a new task to the pool for that number, and then sets every multiple of p in the array to false. The main thread starts the first task with the integer 2, waits for all spawned tasks to finish, and then prints the primes and the time taken.

    The issue is that the more threads there are in the pool, the slower it runs; one thread is fastest. It should be getting faster, not slower! Everything I find on the internet about Java thread pools creates n worker threads up front and has the main thread wait for them all to finish; my method is recursive, as a worker can spawn more workers. I would like to know what is going wrong, and whether Java thread pools can be used recursively at all.
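
    To make the coordination pattern concrete, here is a minimal sketch of the described design in Python (my reconstruction, not the course code; Python's GIL means it cannot show a real speed-up, it only models the recursive spawning, races included):

        from concurrent.futures import ThreadPoolExecutor
        import threading

        def parallel_sieve(n, workers):
            flags = [True] * (n + 1)        # index i is True while i is presumed prime
            futures = []                    # every task ever submitted, in order
            lock = threading.Lock()
            pool = ThreadPoolExecutor(max_workers=workers)

            def worker(p):
                # first still-true element above p that is not a multiple of p
                q = p + 1
                while q <= n and (not flags[q] or q % p == 0):
                    q += 1
                if q <= n:
                    with lock:              # child is registered before this task ends
                        futures.append(pool.submit(worker, q))
                for m in range(2 * p, n + 1, p):
                    flags[m] = False        # cross off multiples of p

            with lock:
                futures.append(pool.submit(worker, 2))
            i = 0
            while True:                     # wait for all tasks, including late arrivals
                with lock:
                    if i >= len(futures):
                        break
                    f = futures[i]
                f.result()
                i += 1
            pool.shutdown()
            return [k for k in range(2, n + 1) if flags[k]]

        print(parallel_sieve(50, 4))        # [2, 3, 5, 7, 11, ...]

    Note that each new prime costs a task submission, and every task hammers the same shared array, so per-task overhead and cache contention can easily outweigh the parallelism; that, rather than the recursion itself, is the usual suspect for "more threads = slower".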

    Read the article

  • Bubble Breaker game solver: better than greedy?

    - by Gregory
    For a mental exercise I decided to try to solve the Bubble Breaker game found on many cell phones (an example here: Bubble Break Game). The rules:

    - A random (N,M,C) board consists of N rows x M columns with C colors.
    - The goal is the highest score, earned by picking the sequence of bubble groups that ultimately leads to it.
    - A bubble group is 2 or more bubbles of the same color that are adjacent in the x or y direction; diagonals do not count.
    - When a group is picked, the bubbles disappear; holes are filled from above first (shift down), then by shifting right.
    - A bubble group scores n * (n - 1), where n is the number of bubbles in the group.

    The first algorithm is a simple exhaustive recursive search: go through the board row by row and column by column picking bubble groups, and once a group is picked, create the new board and solve it recursively. One of the ideas I am using is normalized memoization: once a board is solved, the board and its best score are stored in a memoization table.

    A prototype in Python solves a (2,15,5) board in 8,859 board evaluations, about 3 seconds. A (3,15,5) board takes 12,384,726 boards in 50 minutes on a server. The solver rate is roughly 3-4k boards/sec and gradually decreases as the memoization lookups take longer; the table grows to 5,692,482 boards and is hit 6,713,566 times.

    What other approaches could yield high scores besides exhaustive search? I don't see any obvious way to divide and conquer, but trending towards larger and larger bubble groups seems to be one approach. Thanks to David Locke for posting the paper link, which describes a window solver that uses a constant-depth lookahead heuristic.
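
    For reference, here is a stripped-down reconstruction of the exhaustive memoized solver in Python (mine, not the original prototype). Boards are tuples of columns from bottom to top, groups are flood-filled in x and y only, and empty columns collapse, which stands in for the shift-down/shift-right rule:

        def find_groups(board):
            # board: tuple of columns, each a tuple of colors from bottom to top
            cells = {(c, r): col[r] for c, col in enumerate(board) for r in range(len(col))}
            seen, groups = set(), []
            for start in cells:
                if start in seen:
                    continue
                comp, stack = {start}, [start]
                while stack:                       # flood fill in x and y only
                    c, r = stack.pop()
                    for nb in ((c + 1, r), (c - 1, r), (c, r + 1), (c, r - 1)):
                        if nb in cells and nb not in comp and cells[nb] == cells[start]:
                            comp.add(nb)
                            stack.append(nb)
                seen |= comp
                if len(comp) >= 2:                 # a group needs at least two bubbles
                    groups.append(comp)
            return groups

        def remove_group(board, group):
            cols = []
            for c, col in enumerate(board):
                kept = tuple(col[r] for r in range(len(col)) if (c, r) not in group)
                if kept:                           # empty columns collapse (the shift right)
                    cols.append(kept)
            return tuple(cols)

        def best_score(board, memo=None):
            if memo is None:
                memo = {}
            if board in memo:
                return memo[board]
            best = 0
            for g in find_groups(board):
                n = len(g)
                best = max(best, n * (n - 1) + best_score(remove_group(board, g), memo))
            memo[board] = best
            return best

        # a tiny 2x3 board: pop a pair (2 points), then the merged triple (6 points)
        print(best_score((("R", "R", "B"), ("B", "B", "R"))))   # 8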

    Read the article

  • BMP2AVI program in MATLAB

    - by ariel
    Hi. I wrote a program that used to work (I swear) and has stopped working. It takes a series of BMPs and converts them into an AVI file. This is the code:

        path4avi = 'C:/FadeOutMask/';   % don't forget the '/' at the end of the path
        pathOfFrames = 'C:/FadeOutMask/';
        NumberOfFiles = 1;
        NumberOfFrames = 10;

        for i = 0:1:(NumberOfFiles-1)
            FileName = strcat(path4avi, 'FadeMaskAsael', int2str(i), '.avi')   % the generated file
            aviobj = avifile(FileName, 'compression', 'None');
            aviobj.fps = 10;
            for j = 0:1:(NumberOfFrames-1)
                Frame = strcat(pathOfFrames, 'MaskFade', int2str(i*10+j), '.bmp')   % not a good name for the directory
                [Fa, map] = imread(Frame);
                imshow(Fa, map);
                F = getframe();
                aviobj = addframe(aviobj, F)
            end
            aviobj = close(aviobj);
        end

    And this is the error I get:

        ??? Error using ==> checkDisplayRange at 22
        HIGH must be greater than LOW.
        Error in ==> imageDisplayValidateParams at 57
        common_args.DisplayRange = checkDisplayRange(common_args.DisplayRange, mfilename);
        Error in ==> imageDisplayParseInputs at 79
        common_args = imageDisplayValidateParams(common_args);
        Error in ==> imshow at 199
        [common_args, specific_args] = ...
        Error in ==> ConverterDosenWorkd at 19
        imshow(Fa, map);

    Thank you, Ariel

    Read the article

  • Using datetime float representation as primary key

    - by devanalyst
    From experience I have learned that a surrogate INT column as the primary key, especially an IDENTITY column, performs better than a GUID or char/varchar primary key, so I use IDENTITY keys wherever possible. But I recently came across a schema whose tables were horizontally partitioned and managed through a partitioned view, so the tables could not have an IDENTITY column: that would make the partitioned view non-updatable. One workaround is a dummy 'keygenerator' table with an identity column to hand out IDs for the primary key, but that means one 'keygenerator' table per partitioned view.

    My next thought was to use a float as the primary key, with the following key algorithm that I devised:

        DECLARE @KEY FLOAT
        SET @KEY = CONVERT(FLOAT, GETDATE()) / 100000.0
        SET @KEY = @EMP_ID + @KEY

    Here is how it works. CONVERT(FLOAT, GETDATE()) gives the float representation of the current datetime, since internally SQL Server represents every datetime as a float. Dividing by 100000.0 pushes all the digits to the right of the decimal point. Adding @EMP_ID, the employee ID, which is an integer, puts a unique value on the left. The logic is that the employee ID is guaranteed to be unique across sessions, since an employee cannot connect to the application more than once at the same time, and for the same employee the current datetime will be unique each time a key is generated. In all, a unique key across all employee sessions and across time. So for employee IDs 11 and 12, I get key values like 12.40046693321566357 and 11.40046693542361111.

    My concern is whether a float primary key offers any benefit over a GUID or char/varchar primary key. Also important: because of the partitioning, the float column is going to be part of a composite key.
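
    One caveat worth checking before adopting this (my observation, not from the schema above): a float64 key carries only 53 bits, roughly 15-16 significant decimal digits, and the employee-id digits compete with the datetime digits for that budget. A quick Python illustration:

        # the scaled datetime fraction from the example keys above
        t = 0.40046693321566357
        print(repr(t))                 # all digits of the scaled datetime
        print(repr(11 + t))            # small id: the last few digits are already rounded away
        print(repr(8_000_000 + t))     # hypothetical large id: most of the time detail is gone

    With ids in the millions, two keys generated a few milliseconds apart can round to the same double, i.e. a duplicate key, on top of the usual objections to floating-point equality comparisons in an index.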

    Read the article

  • Required Working Precision for the BBP Algorithm?

    - by brainfsck
    Hello, I'm looking to compute the nth digit of Pi in a low-memory environment. As I don't have decimals available to me, this integer-only BBP algorithm in Python has been a great starting point. I only need to calculate one digit of Pi at a time.

    How can I determine the lowest I can set D, the "number of digits of working precision"? D=4 gives me many correct digits, but a few are off by one. For example, computing digit 393 with a precision of 4 gives me 0xafda, from which I extract the digit 0xa; the correct digit is 0xb. No matter how high I set D, testing a sufficient number of digits eventually finds one where the formula returns an incorrect value.

    I've tried raising the precision when the digit is "close" to another, e.g. 0x3fff or 0x1000, but I cannot find any good definition of "close"; for instance, digit 9798 gives me 0xcde6, which is not very close to 0xd000, yet the correct digit is 0xd. Can anyone help me figure out how much working precision is needed to calculate a given digit using this algorithm? Thank you.
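
    For comparison, here is a float-based digit extractor for the same BBP formula (my own sketch, not the linked integer-only code). A double carries 53 bits, about 13 hex digits of working precision, so this version is only good up to positions somewhere in the millions, but it gives a baseline to test a fixed working precision against:

        def pi_hex_digit(n):
            """Hex digit n of pi after the point (n = 1 returns 2, since pi = 3.243F...)."""
            def S(j):
                # left sum: modular exponentiation keeps only fractional contributions
                s = 0.0
                for k in range(n):
                    s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
                # right tail: terms shrink by a factor of 16 each step
                k, t = n, 0.0
                while 16.0 ** (n - 1 - k) / (8 * k + j) > 1e-17:
                    t += 16.0 ** (n - 1 - k) / (8 * k + j)
                    k += 1
                return s + t
            frac = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
            return int(16 * frac)

        print(hex(pi_hex_digit(393)))   # the example above: should print 0xb

    For the integer-only version, one common heuristic is to compute the digit at working precision D and again at D+1 or D+2, and only trust the result when the two runs agree, raising D until they do.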

    Read the article

  • iPhone - User Defaults and UIImages

    - by Staros
    Hello, I've been developing an iPhone app for the last few months. Recently I wanted to improve performance and cache a few of the images used in the UI. The images are downloaded randomly from the web by the user, so I can't add specific images to the project. I'm also already using NSUserDefaults to save other info within the app.

    So now I'm attempting to save a dictionary of UIImages to my NSUserDefaults object and get:

        -[UIImage encodeWithCoder:]: unrecognized selector sent to instance

    I then decided to subclass UIImage with a class named UISaveableImage and implement NSCoding. So now I'm at:

        @implementation UISaveableImage

        -(void)encodeWithCoder:(NSCoder *)encoder
        {
            [encoder encodeObject:super forKey:@"image"];
        }

        -(id)initWithCoder:(NSCoder *)decoder
        {
            if (self = [super init]) {
                super = [decoder decodeObjectForKey:@"image"];
            }
            return self;
        }

        @end

    which isn't any better than where I started. If I could convert a UIImage to NSData I would be fine, but all I can find are functions like UIImagePNGRepresentation, which require me to know what type of image this was, something UIImage doesn't let me do. Thoughts? I feel like I might have wandered down the wrong path...

    Read the article

  • Flash video slooow in AIR 2 HTMLLoader component

    - by shane
    I am working on a full-screen kiosk application in Flex 4/AIR 2 using Flash Builder 4. We have a company training website which staff access via the kiosk, and the main content is interactive Flash training videos. Our target machines are by no means beefy: Atom N270s @ 1.6GHz with 1GB RAM.

    As it stands, the videos are all but unusable inside the AIR application: it becomes completely unresponsive (100% CPU usage, click events take 5-10 seconds to register). So far I have tried:

    - Increasing the default frame rate from 24fps to 60 (nativeWindow.stage.frameRate = 60). No improvement.
    - Running the videos in a stripped-down version of my app, just a full-screen HTMLLoader component pointed at the training website. No better than before.
    - Disabling hyper-threading. The Atom CPU presents two logical cores, and the AIR app could only use one thread, so it maxed out at 50% CPU usage. Since the kiosk will only run the AIR app, I am happy to lose hyper-threading to boost its performance. Marginal improvement.

    The same website with the same videos is responsive in IE7 on the same machine, although Internet Explorer does take advantage of the CPU's hyper-threading. The Flash videos are built with Adobe Captivate and, as I understand it, employ JavaScript to relay results back to the server. I will add more information about the video content asap, as the training guru is back in the office later this week.

    Read the article

  • Cloud-aware programming and help choosing a good framework

    - by Shoaibi
    How can I write a cloud-aware application, i.e. one that takes advantage of being deployed on a cloud? Is it the same as an application that runs on a VPS or dedicated server? If not, what are the differences, and are there any design changes? What steps do I need to take to migrate an existing application to being cloud-aware?

    I am also about to implement a web application idea that needs security, performance, caching and, more importantly, has to be free. I have been comparing frameworks and found that Django has the lowest RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks I have seen or know of are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails... So I would leave this to your opinion as well: suggest a good free framework based on my needs. Finally, thanks for reading the essay ;)

    Read the article

  • How to prepopulate an Outlook MailItem and avoid a COM exception from the object model guard

    - by torrente
    Hi, I work for a company that develops a CRM tool and offers integration with MS Office (2003 & 2007) from Windows XP through 7 (I'm working on Win7). My task is to launch an Outlook instance (using C#) from this CRM tool when the user wants to send an email, and prepopulate it with data from the CRM tool (email, recipient, etc.). All of this already works just fine.

    The problem is that Outlook's "object model guard" throws a COM exception (Operation aborted (Exception from HRESULT: 0x80004004 (E_ABORT))) the moment I try to read a protected value from the MailItem (such as mail.HTMLBody). Example snippet:

        using MSOutlook = Microsoft.Office.Interop.Outlook;

        // untrusted instance
        _outlook = new MSOutlook.Application();
        MSOutlook.MailItem mail = (MSOutlook.MailItem)_outlook.CreateItem(MSOutlook.OlItemType.olMailItem);

        // this is where the exception occurs
        string outlookStdHTMLBody = mail.HTMLBody;

    I've done quite a bit of reading and know that my Outlook instance (created via new Application) is considered untrusted, which is why the object model guard kicks in. I do have a workaround for development: if I run VS2010 as administrator and Outlook as administrator as well, all is good, presumably because they then share the same (high) integrity level and UAC stops complaining. But that is no way to deploy.

    Now the question: is there a way to obtain a trusted instance of Outlook so that I can avoid this exception? I've read that when developing an Office add-in using VSTO one can obtain a trusted instance from the OnComplete event and/or via ThisAddIn, but I merely want to start an Outlook instance and prepopulate it; developing an add-in is not the requirement. And to be clear, I have no problem with pop-ups informing the user that Outlook is being accessed; I just want to get rid of the exception! So how can I get around this problem in code? Any help is highly appreciated! Thomas

    Read the article

  • Adding stock data to AmiBroker using C#

    - by femi
    Hello, I have had a hard time getting an answer to this and I would really, really appreciate some help; I have been on it for over two weeks without headway. I want to use C# to add a line of stock data to AmiBroker, but I just can't find a CLEAR explanation of how to instantiate it in C#. In VB I would do something like:

        Dim AmiBroker = CreateObject("Broker.Application")
        sSymbol = ArrayRow(0).ToUpper
        Stock = AmiBroker.Stocks.Add(sSymbol)
        iDate = ArrayRow(1).ToLower
        quote = Stock.Quotations.Add(iDate)
        quote.Open = CSng(ArrayRow(2))
        quote.High = CSng(ArrayRow(3))
        quote.Low = CSng(ArrayRow(4))
        quote.Close = CSng(ArrayRow(5))
        quote.Volume = CLng(ArrayRow(6))

    The problem is that CreateObject will not work in C# in this instance. I found the code below somewhere online, but I can't see how to get from it to the above:

        Type objClassType = Type.GetTypeFromProgID("Broker.Application");

        // Instantiate AmiBroker
        objApp = Activator.CreateInstance(objClassType);
        objStocks = objApp.GetType().InvokeMember("Stocks", BindingFlags.GetProperty, null, objApp, null);

    Can anyone help me here? Thanks
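
    For what it's worth, the VB version works because CreateObject gives a late-bound COM dispatch; in C# 3 you are stuck spelling out InvokeMember for every property, as in the snippet above, while C# 4's dynamic keyword restores the late binding. As a point of comparison, here is the same object chain driven through Python's pywin32, which is late-bound by default (a sketch with made-up sample data; assumes AmiBroker is installed):

        import win32com.client  # pywin32

        # late-bound dispatch, the direct equivalent of VB's CreateObject
        ab = win32com.client.Dispatch("Broker.Application")

        row = ["MSFT", "20100401", "30.10", "30.90", "29.80", "30.50", "1500000"]  # sample line
        stock = ab.Stocks.Add(row[0].upper())
        quote = stock.Quotations.Add(row[1])
        quote.Open = float(row[2])
        quote.High = float(row[3])
        quote.Low = float(row[4])
        quote.Close = float(row[5])
        quote.Volume = int(row[6])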

    Read the article

  • Error starting Windows Media Encoder

    - by George2
    Hello everyone, I am running the following code snippet on Windows Server 2003 x64 edition with Windows Media Encoder 9, and invoking encoder.Start() fails with this error:

        System.Runtime.InteropServices.COMException 0xC00D1B67

    My code snippet is below; does anyone have any idea what is wrong?

        IWMEncSourceGroup SrcGrp;
        IWMEncSourceGroupCollection SrcGrpColl;
        SrcGrpColl = encoder.SourceGroupCollection;
        SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");

        IWMEncVideoSource2 SrcVid;
        IWMEncSource SrcAud;
        SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
        SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
        SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
        SrcAud.SetInput("Device://Default_Audio_Device", "", "");

        // Specify a file object in which to save encoded content.
        IWMEncFile File = encoder.File;
        string CurrentFileName = Guid.NewGuid().ToString();
        File.LocalFileName = CurrentFileName;
        CurrentFileName = File.LocalFileName;

        // Choose a profile from the collection.
        IWMEncProfileCollection ProColl = encoder.ProfileCollection;
        IWMEncProfile Pro;
        for (int i = 0; i < ProColl.Count; i++)
        {
            Pro = ProColl.Item(i);
            if (Pro.Name == "Screen Video/Audio High (CBR)")
            {
                SrcGrp.set_Profile(Pro);
                break;
            }
        }

        encoder.Start();

    Thanks in advance, George

    Read the article

  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC. My database, which doesn't contain any data yet, only the correct tables, uses SQL Server 2008, installed on my development machine. I connect to the database from my application through the Server Explorer, followed by LINQ to SQL mapping.

    Once I finish developing the site, I will move it to my hosting service, a virtual hosting plan. I'm concerned that the SQL Server setup currently working on my development machine will be hard to reproduce on the production server, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio, stored in the App_Data directory. My questions are the following:

    - Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file? If so, how can I move it? I believe this is called the Detach command, is it not?
    - Are there any performance/security issues with an .mdf file like this?
    - Would my intended setup work with a typical virtual hosting plan? I'm hoping the .mdf database won't count against the limited number of SQL Server databases included in my plan.

    I hope this question isn't too broad. Thanks in advance! Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.

    Read the article

  • Winforms: How to speed up Invalidate()?

    - by Pedery
    I'm developing a retained-mode drawing application in GDI+. The application draws simple shapes on a canvas and performs basic editing; the math that does this is optimized to the last byte and is not the issue. I'm drawing on a panel that uses the built-in ControlStyles.DoubleBuffer.

    My problem arises when I run the app maximized on a big (HD) monitor. If I draw a line from one corner of the (big) canvas to the diagonally opposite one, it starts to lag and CPU usage shoots up. Every graphical object in the app has a bounding box, so when I invalidate the bounding box of a line spanning the maximized window diagonally, that bounding box is virtually as big as the canvas. While the user is drawing a line, this invalidation happens on every mousemove event, and the lag is clearly visible. The lag is there even when the line is the only object on the canvas.

    I've tried to optimize this in many ways. If I draw a shorter line, CPU usage and lag go down. If I remove the Invalidate() and keep all other code, the app is quick. If I invalidate a Region that only spans the figure instead of the bounding box, it is just as slow. If I split the bounding box into a series of smaller boxes that lie back to back, reducing the invalidated area, there is no visible performance gain.

    So I'm at a loss here. How can I speed up the invalidation? On a side note, both Paint.NET and MSPaint suffer from the same shortcoming, yet Word and PowerPoint seem able to paint a line as described above with no lag and no CPU load at all. So the desired result is achievable; the question is how.

    Read the article

  • Prefuse: Reloading of XML files

    - by John
    Hello all, I am new to the prefuse visualization toolkit and have a couple of general questions. For my purposes, I would like to perform an initial visualization using prefuse (graphview/graphml). Once it is rendered, when the user clicks a node I would like to completely reload a new XML file for a new visualization. This would let me "pre-package" graphs for display.

    For example: if I search for Ted, an XML file relating to Ted is loaded and rendered. In the display I see that Ted has associated nodes called Bill and Joe. When I click Joe, I would like to clear the display and load an XML file associated with Joe, and so on.

    I have looked into loading one very large XML file containing all node and relationship info and letting prefuse handle the hops from one level to another, but I am sure system performance issues would eventually arise given the size of the data. Thanks in advance for any help, John

    Read the article

  • C# (WCF) architecture: file and directory structure (and instantiation)

    - by stevenrosscampbell
    Hello and thanks for any assistance. I have a WCF service that I'm trying to properly modularize, and I'm interested in finding out whether there is a better way to organize the files and directories, along with the instantiation that goes with them. Is there a more appropriate abstraction I may be missing? Is the approach below sound, especially for performance and the ability to handle thousands of simultaneous requests?

    Currently I have the following structure:

        // Root\Service.cs
        public class Service : IService
        {
            public void CreateCustomer(Customer customer)
            {
                CustomerService customerService = new CustomerService();
                customerService.Create(customer);
            }

            public void UpdateCustomer(Customer customer)
            {
                CustomerService customerService = new CustomerService();
                customerService.Update(customer);
            }
        }

        // Root\Customer\CustomerService.cs
        public class CustomerService
        {
            public void Create(Customer customer)   { /* DO SOMETHING */ }
            public void Update(Customer customer)   { /* DO SOMETHING */ }
            public void Delete(int customerId)      { /* DO SOMETHING */ }
            public Customer Retrieve(int customerId) { /* DO SOMETHING */ return null; }
        }

    Note: I do not include the Customer object or the data access libraries in this example, as I am only concerned with the service. Let me know what you think, whether you know a better way, or any resource that could help. Thanks kindly, Steven

    Read the article

  • Hibernate: order multiple one-to-many relations

    - by Markos Fragkakis
    I have a search screen, using JSF, JBoss Seam and Hibernate underneath. There are columns for A, B and C, where both the A-to-B and B-to-C relations are one-to-many: A has a List<B> and B has a List<C>. The UI table supports ordering by any column (ASC or DESC), so I want the query results to come back ordered; that is why I used Lists in the model. However, I got an exception saying Hibernate cannot eagerly fetch multiple bags (it considers both Lists to be bags).

    There is an interesting blog post here, and it identifies the following solutions:

    - Use the @IndexColumn annotation (there is none in my DB, and what's more, I want the position of results to be determined by the ordering, not by an index column)
    - Fetch lazily (for performance reasons, I need eager fetching)
    - Change List to Set

    So I changed the Lists to Sets, which by the way is more correct model-wise. First, if I don't use @OrderBy, the PersistentSet returned by Hibernate wraps a HashSet, which has no ordering. Second, if I do use @OrderBy, the PersistentSet wraps a LinkedHashSet, which is what I want, but the @OrderBy property is hardcoded, so any other ordering I apply through the UI comes after it.

    I tried again with Sets, using SortedSet (and its implementation, TreeSet), but I have some issues: I want ordering to take place in the DB, not in memory, which is what TreeSet does (either through a Comparator or through the elements' Comparable interface). Second, I found the Hibernate annotation @Sort, which has SortOrder.UNSORTED and can also take a Comparator. I still haven't managed to make it compile, and I am still not convinced it is what I need. Any advice?

    Read the article

  • AJAX delayed loading of a UserControl in ASP.NET

    - by user196202
    Regarding the AJAX delayed loading of user controls (or any controls) described in the post at Encosia.com: http://encosia.com/2008/02/05/boost-aspnet-performance-with-deferred-content-loading/

    I tried to implement it, but I noticed it only works for simple controls, or user controls that contain simple ASP.NET controls (or HTML tags). When advanced dynamic AJAX controls are involved (like the AjaxControlToolkit or Telerik controls), which carry JavaScript inside them, the method of injecting the HTML into the .InnerHtml property of a div tag DOES NOT WORK. From what I have read, the browser needs to load the scripts at page load and will not interpret scripts injected via .InnerHtml afterwards.

    I have attached an example of a delayed-loading project (from Encosia.com by Dave Ward) with my modifications (see DefaultPopup.aspx, beforePopup.aspx and AfterPopup.aspx). I modified the RssReader to show a ListView with popup items, implemented via the ACT HoverMenuExtender. In the regular page the popup items show correctly, but with delayed loading, done by rendering the HTML in a virtual page and injecting it into the .InnerHtml property, it does not work.

    So my questions are: is there a way to do delayed loading for controls that include scripts, like ACT, Telerik and others? And for the AJAX templates, if I need to inject an advanced control into the page, how do I do it with your approach?

    Thanks very much. (I can't attach files here, so please ask me by mail ([email protected]) and I'll send the project to you.) Zahi Kramer

    Read the article

  • How do I detect if there is already a similar document stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database, where duplicates are not exact matches but rather similar documents. For this purpose I decided to use FuzzyQuery, as follows:

        var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
            new Term("text", queryText), 0.8f, 0);
        hits = _searcher.Search(query);

    The idea was to set the minimum similarity to 0.8 (which I think is high enough) so that only similar documents would be found, excluding those that are not sufficiently similar. To test this code I checked whether it finds an already-existing document: queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it doesn't even detect an exact match. The index was built with this code:

        doc.Add(new global::Lucene.Net.Documents.Field(
            "text",
            text,
            global::Lucene.Net.Documents.Field.Store.YES,
            global::Lucene.Net.Documents.Field.Index.TOKENIZED,
            global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations below, and the results are: a TermQuery doesn't return any result, while a query constructed with

        var _analyzer = new RussianAnalyzer();
        var parser = new global::Lucene.Net.QueryParsers.QueryParser("text", _analyzer);
        var query = parser.Parse(queryText);
        var _searcher = new IndexSearcher(Settings.General.Default.LuceneIndexDirectoryPath);
        var hits = _searcher.Search(query);

    returns several results: the document with the exact match has the maximum score, and several other documents have similar content.
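
    Two things worth noting. Lucene's FuzzyQuery computes edit distance against a single indexed term, not against a whole document, so passing an entire document's text as the term practically guarantees zero hits; that alone would explain the "no exact match" result. And independent of Lucene, it helps to pin down what "similar enough to be a duplicate" means; a common yardstick is Jaccard overlap on word shingles. A minimal sketch:

        def shingles(text, k=3):
            words = text.lower().split()
            return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 1.0

        doc1 = "the quick brown fox jumps over the lazy dog"
        doc2 = "the quick brown fox leaps over the lazy dog"
        print(jaccard(shingles(doc1), shingles(doc2)))   # 0.4: similar, not identical

    A duplicate detector can then index shingle hashes and flag any new document whose overlap with an existing one exceeds a threshold like 0.8.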

    Read the article

  • Choosing the row (from a group) with the max value in a SQL Server database

    - by sriehl
    I have a large database and am putting together a report of the data. I have aggregated and summed data from many tables to get two tables that look like the following:

        id | code | value        id | code | value
        13 | AA   | 0.5          13 | AC   | 2.0
        13 | AB   | 1.0          14 | AB   | 1.5
        14 | AA   | 2.0          13 | AA   | 0.5
        15 | AB   | 0.5          15 | AB   | 3.0
        15 | AD   | 1.5          15 | AA   | 1.0

    I need a list of ids, each with the code whose values (summed across both tables) are the largest:

        13 | AC
        14 | AA
        15 | AB

    There are 4-6 thousand records, and it is not possible to change the original tables. I'm not too worried about performance, as I only need to run it a few times a year.

    Edit: let me explain a bit more clearly. Imagine the id is a customer id, the code is who they ordered from, and the value is how much they spent there. I need a list of all customer ids, each with the store that customer spent the most money at (and if they spent the same at two different stores, a value such as 'ZZ' in place of the store name).
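
    In SQL terms this is the classic greatest-per-group pattern: UNION ALL the two tables, SUM by id and code, then keep each id's top code. Since it only runs a few times a year, even doing the second step client-side would be fine; the set logic, sketched in Python with the sample rows above:

        from collections import defaultdict

        rows = [  # (id, code, value) from both tables, concatenated
            (13, "AA", 0.5), (13, "AB", 1.0), (14, "AA", 2.0), (15, "AB", 0.5), (15, "AD", 1.5),
            (13, "AC", 2.0), (14, "AB", 1.5), (13, "AA", 0.5), (15, "AB", 3.0), (15, "AA", 1.0),
        ]

        totals = defaultdict(float)             # (id, code) -> summed value
        for id_, code, value in rows:
            totals[(id_, code)] += value

        best = {}                               # id -> (top value, winning code)
        for (id_, code), value in totals.items():
            if id_ not in best or value > best[id_][0]:
                best[id_] = (value, code)
            elif value == best[id_][0]:
                best[id_] = (value, "ZZ")       # tie: same spend at two stores
        for id_ in sorted(best):
            print(id_, best[id_][1])            # 13 AC / 14 AA / 15 AB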

    Read the article

  • Relay WCF Service

    - by Matt Ruwe
    This is more of an architectural and security question than anything else; I'm trying to determine whether a suggested architecture is necessary. Let me explain our configuration. We have a standard DMZ with two firewalls, one external-facing and the other connecting to the internal LAN. Each application tier currently runs as follows:

    - Outside the firewall: Silverlight application
    - In the DMZ: WCF service (business logic & data access layer)
    - Inside the LAN: database

    I'm receiving input that this architecture is not correct. Specifically, it has been suggested that, because "a web server is easily hacked", we should place a relay server inside the DMZ that communicates with another WCF service inside the LAN, which would then talk to the database. The external firewall is currently configured to allow only port 443 (https) to the WCF service; the internal firewall is configured to allow SQL connections from the WCF service in the DMZ. Ignoring the obvious performance implications, I don't see the security benefit either. I'm going to reserve my own judgement of this suggestion to avoid polluting the answers with my bias. Any input is appreciated. Thanks, Matt

    Read the article

  • Passing string with (accidental) escape character loses character even though it's a raw string

    - by Steen
    I have a function with a python doctest that fails because one of the test input strings has a backslash that's treated like an escape character even though I've encoded the string as a raw string. My doctest looks like this: >>> infile = [ "Todo: fix me", "/** todo: fix", "* me", "*/", r"""//\todo stuff to fix""", "TODO fix me too", "toDo bug 4663" ] >>> find_todos( infile ) ['fix me', 'fix', 'stuff to fix', 'fix me too', 'bug 4663'] And the function, which is intended to extract the todo texts from a single line following some variation over a todo specification, looks like this: todos = list() for line in infile: print line if todo_match_obj.search( line ): todos.append( todo_match_obj.search( line ).group( 'todo' ) ) And the regular expression called todo_match_obj is: r"""(?:/{0,2}\**\s?todo):?\s*(?P<todo>.+)""" A quick conversation with my ipython shell gives me: In [35]: print "//\todo" // odo In [36]: print r"""//\todo""" //\todo And, just in case the doctest implementation uses stdout (I haven't checked, sorry): In [37]: sys.stdout.write( r"""//\todo""" ) //\todo My regex-foo is not high by any standards, and I realize that I could be missing something here. EDIT: Following Alex Martellis answer, I would like suggestions on what regular expression would actually match the blasted r"""//\todo fix me""". I know that I did not originally ask for someone to do my homework, and I will accept Alex's answer as it really did answer my question (or confirm my fears). But I promise to upvote any good solutions to my problem here :) I'm using Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15) Thank you for reading this far (If you skipped directly down here, I understand)

    Read the article

  • What are the primitive Forth operators?

    - by Barry Brown
    I'm interested in implementing a Forth system, just so I can get some experience building a simple VM and runtime. When starting in Forth, one typically learns about the stack and its operators (DROP, DUP, SWAP, etc.) first, so it's natural to think of these as being among the primitive operators. But they're not: each of them can be broken down into operators that directly manipulate memory and the stack pointers. Later one learns about store (!) and fetch (@), which can be used to implement DUP, SWAP, and so forth (ha!).

    So what are the primitive operators? Which ones must be implemented directly in the runtime environment, from which all the others can be built? I'm not interested in high performance; I want something that I (and others) can learn from. Operator optimization can come later. (Yes, I'm aware that I could start with a Turing machine and go from there. That's a bit extreme.)

    Edit: What I'm aiming for is akin to bootstrapping an operating system or a new compiler. What do I need to implement, at minimum, so that I can construct the rest of the system out of those primitive building blocks? I won't implement this on bare hardware; as an educational exercise, I'd write my own minimal VM.
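
    To make the primitive-versus-derived distinction concrete, here is a toy inner interpreter (a Python sketch, nothing like a real Forth implementation) in which only pick, roll and + are primitives and the classic stack words are colon definitions built on them:

        ds = []                                  # the data stack

        def _pick():                             # ( xn ... x0 n -- xn ... x0 xn )
            n = ds.pop()
            ds.append(ds[-1 - n])

        def _roll():                             # ( xn ... x0 n -- xn-1 ... x0 xn )
            n = ds.pop()
            ds.append(ds.pop(-1 - n))

        words = {
            "pick": _pick,                       # primitives: Python callables
            "roll": _roll,
            "+":    lambda: ds.append(ds.pop() + ds.pop()),
            "dup":  ["0", "pick"],               # everything below is derived
            "over": ["1", "pick"],
            "swap": ["1", "roll"],
            "rot":  ["2", "roll"],
        }

        def run(tokens):
            for t in tokens:
                if t.lstrip("-").isdigit():
                    ds.append(int(t))            # literals push themselves
                elif callable(words[t]):
                    words[t]()                   # primitive: execute directly
                else:
                    run(words[t])                # colon definition: interpret its body

        run("3 4 swap dup + +".split())          # 3 4 -> 4 3; dup + -> 4 6; + -> 10
        print(ds)                                # [10]

    Real minimal Forths make different cuts (eForth famously gets by with around 30 machine primitives, including the memory and return-stack words), but the shape is the same: a tiny executable core plus a dictionary of definitions built on it.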

    Read the article

  • Sending Email through Gmail

    - by persistence
    I am writing a program that sends an email through Gmail, but I get a serious operation-timeout error. What is the likely cause?

        class Mailer
        {
            MailMessage ms;
            SmtpClient Sc;

            public Mailer()
            {
                Sc = new SmtpClient("smtp.gmail.com");
                //Sc.Credentials = CredentialCache.DefaultNetworkCredentials;
                Sc.EnableSsl = true;
                Sc.Port = 465;
                Sc.Timeout = 900000000;
                Sc.DeliveryMethod = SmtpDeliveryMethod.Network;
                Sc.UseDefaultCredentials = false;
                Sc.Credentials = new NetworkCredential("uid", "mypss");
            }

            public void MailTodaysBirthdays(List<Celebrant> TodaysCelebrant)
            {
                int i = TodaysCelebrant.Count();
                foreach (Celebrant cs in TodaysCelebrant)
                {
                    //if (IsEmail(cs.EmailAddress.ToString().Trim()))
                    //{
                    ms = new MailMessage();
                    ms.To.Add(cs.EmailAddress);
                    ms.From = new MailAddress("uid", "Developers", System.Text.Encoding.UTF8);
                    ms.Subject = "Happy Birthday";
                    String EmailBody = "Happy Birthday " + cs.FirstName;
                    ms.Body = EmailBody;
                    ms.Priority = MailPriority.High;
                    try
                    {
                        Sc.Send(ms);
                    }
                    catch (Exception ex)
                    {
                        Sc.Send(ms);
                        BirthdayServices.LogEvent(ex.Message.ToString(), EventLogEntryType.Error);
                    }
                    //}
                }
            }
        }
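
    A likely cause worth checking (a well-known gotcha rather than something visible in the code itself): port 465 expects implicit SSL, i.e. TLS from the very first byte, while System.Net.Mail.SmtpClient only speaks STARTTLS, which Gmail offers on port 587; a client that greets in plaintext on 465 just hangs until the timeout fires. The two handshakes, contrasted with Python's smtplib:

        import smtplib

        # STARTTLS: connect in plaintext, then upgrade; this is what SmtpClient
        # does, so with SmtpClient use port 587
        with smtplib.SMTP("smtp.gmail.com", 587, timeout=30) as s:
            s.starttls()
            s.login("uid", "mypss")   # placeholder credentials from the snippet above
            s.sendmail("uid", ["to@example.com"],
                       "Subject: Happy Birthday\r\n\r\nHappy Birthday!")

        # implicit SSL: TLS from the first byte; this is what port 465 expects
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, timeout=30) as s:
            s.login("uid", "mypss")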

    Read the article

  • AI: determining what tests to run to get the most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com I have a system (see about page on site for details) where: I need to output a ranked list, with confidences, of categories that match a particular feature vector the binary feature vectors are a list of site IDs & whether this session detected a hit feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit) categories are a large, non-closed set (user IDs) my total feature space is approximately 50 million items (URLs) for any given test, I can only query approx. 0.2% of that space I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc) getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap sql queries I have training data that can say authoritatively that any two feature vectors are the same category but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user id) I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to take into balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?

    Read the article

  • How Random is System.Guid.NewGuid()? (Take two)

    - by Vilx-
    Before you start marking this as a duplicate, read me out. The other question has a (most likely) incorrect accepted answer. I do not know how .NET generates its GUIDs, probably only Microsoft does, but there's a high chance it simply calls CoCreateGuid(). That function however is documented to be calling UuidCreate(). And the algorithms for creating an UUID are pretty well documented. Long story short, be as it may, it seems that System.Guid.NewGuid() indeed uses version 4 UUID generation algorithm, because all the GUIDs it generates matches the criteria (see for yourself, I tried a couple million GUIDs, they all matched). In other words, these GUIDs are almost random, except for a few known bits. This then again raises the question - how random IS this random? As every good little programmer knows, a pseudo-random number algorithm is only as random as its seed (aka entropy). So what is the seed for UuidCreate()? How ofter is the PRNG re-seeded? Is it cryptographically strong, or can I expect the same GUIDs to start pouring out if two computers accidentally call System.Guid.NewGuid() at the same time? And can the state of the PRNG be guessed if sufficiently many sequentially generated GUIDs are gathered?

    Read the article
