Search Results

Search found 14292 results on 572 pages for 'high integrity systems'.


  • Are web-safe colors still relevant?

    - by Gavin Miller
    Since the vast majority of monitors are 16-bit color or more, including mobile devices, does it make sense to even consider web-safe colors when choosing color schemes? Or is it something that ought to be relegated to history as a piece of trivia?

    For those of you that don't know what web-safe colors are:

    Another set of 216 color values is commonly considered to be the "web-safe" color palette, developed at a time when many computer displays were only capable of displaying 256 colors. A set of colors was needed that could be shown without dithering on 256-color displays; the number 216 was chosen partly because computer operating systems customarily reserved sixteen to twenty colors for their own use; it was also selected because it allows exactly six shades each of red, green, and blue (6 × 6 × 6 = 216). The list of colors is often presented as if it has special properties that render them immune to dithering. In fact, on 256-color displays applications can set a palette of any selection of colors that they choose, dithering the rest. These colors were chosen specifically because they matched the palettes selected by the then leading browser applications. [Wikipedia]
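
    For reference, the 216-color palette described above can be generated from the six channel values 0x00, 0x33, 0x66, 0x99, 0xCC and 0xFF. A minimal C# sketch (the class name is just illustrative):

        using System;
        using System.Collections.Generic;

        static class WebSafePalette
        {
            // The six permitted values per channel; 6 x 6 x 6 = 216 colors in total.
            static readonly int[] Steps = { 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF };

            // Enumerates every web-safe color as a "#RRGGBB" hex string.
            public static IEnumerable<string> HexColors()
            {
                foreach (int r in Steps)
                    foreach (int g in Steps)
                        foreach (int b in Steps)
                            yield return string.Format("#{0:X2}{1:X2}{2:X2}", r, g, b);
            }
        }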

    Read the article

  • Vote on Pros and Cons of Java HTML to XML cleaners

    - by George Bailey
    I am looking to allow HTML emails (and other HTML uploads) without letting scripts through. I plan to have a whitelist of safe tags and attributes, as well as a whitelist of CSS properties and value regexes (to prevent automatic return receipts).

    I asked a question: Parse a badly formatted XML document (like an HTML file). I found there are many, many ways to do this. Some systems have built-in sanitizers (which I don't care so much about). This page is a very nice listing, but I get a bit lost in it: http://java-source.net/open-source/html-parsers

    It is very important that the parser never throws an exception; there should always be a best-guess result to the parse/clean. It is also very important that the result is valid XML that can be traversed in Java.

    I posted some product information and marked it Community Wiki. Please post any other product suggestions you like and mark them Community Wiki so they can be voted on. Comments or wiki edits on what part of a certain product is better and what is not would also be greatly appreciated (for example, speed vs. accuracy).

    It seems that we will go with either jsoup (seems more active and up to date) or TagSoup (compatible with JDK 4 and has been around a while). A +1 for any of these products would be if they could convert all style sheets into inline styles on the elements.
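
    To make the whitelist idea concrete independently of any particular parser, here is a hedged C# sketch (it is not jsoup or TagSoup; it assumes the HTML has already been coerced into well-formed XML by such a parser, the tag/attribute lists are placeholders, and the root element is assumed to be allowed):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        static class HtmlWhitelist
        {
            // Placeholder whitelists -- adjust to your policy.
            static readonly HashSet<string> AllowedTags = new HashSet<string>
                { "p", "b", "i", "a", "ul", "ol", "li", "br", "span" };
            static readonly HashSet<string> AllowedAttributes = new HashSet<string>
                { "href", "title", "style" };

            // Strips any element or attribute not on the whitelist, in place.
            public static void Sanitize(XElement root)
            {
                foreach (XElement el in root.Descendants().ToList())
                {
                    // Parent may already be null if an ancestor was removed earlier.
                    if (el.Parent != null && !AllowedTags.Contains(el.Name.LocalName.ToLowerInvariant()))
                        el.Remove();   // drops the element together with its children
                }
                foreach (XElement el in root.DescendantsAndSelf())
                    foreach (XAttribute attr in el.Attributes().ToList())
                        if (!AllowedAttributes.Contains(attr.Name.LocalName.ToLowerInvariant()))
                            attr.Remove();
            }
        }

    Usage would be along the lines of HtmlWhitelist.Sanitize(XDocument.Parse(tidiedXhtml).Root); a real sanitizer would also filter the CSS inside any surviving style attributes.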

    Read the article

  • What is the correct connection string for clsql when accessing ms sqlserver using odbc?

    - by nunb
    I am accessing a database on another machine from an OS X server. After setting up FreeTDS through MacPorts and creating the freetds.conf file like so:

        dump file = /tmp/freetds.log
        # nunb our Microsoft server
        [winnt]
            host = 192.168.0.2
            port = 1433
            tds version = 8.0

    I have the following test commands that work. Testing that FreeTDS works:

        tsql -S winnt -U sa
        1> use myDB;
        2> select count (*) from "sysObjects";
        3> go

    ODBC is set up through /Applications/Utilities/ODBC\ Administrator.app, with DSN "gmb" using the FreeTDS driver and a ServerName of "winnt" -- testing it yields:

        iodbctest "dsn=gmb;uid=sa;pwd=foo"
        SQL> select count (*) from "sysObjects";
        = 792

    Now I run the following code in Lisp:

        (require 'asdf)
        (setf asdf:*central-registry* nil)
        (push #P"/Users/way/ff/clbuild/systems/" asdf:*central-registry*)
        (asdf:oos 'asdf:load-op 'cffi)
        (asdf:oos 'asdf:load-op 'clsql)
        (asdf:operate 'asdf:test-op 'cffi)
        (asdf:oos 'asdf:load-op 'clsql-odbc)
        (asdf:oos 'asdf:test-op 'clsql-odbc)
        (in-package :clsql-user)
        (connect '("gmb" "sa" "foo") :database-type :odbc)

    This drops me into the debugger with the error:

        debugger invoked on a SQL-DATABASE-ERROR in thread
        #<THREAD "initial thread" RUNNING {1194EA31}>:
          A database error occurred: NIL / IM002
          [iODBC][Driver Manager]Data source name not found and no default driver specified.
          Driver could not be loaded
        Type HELP for debugger help, or (SB-EXT:QUIT) to exit from SBCL

    Read the article

  • Bubble Breaker Game Solver better than greedy?

    - by Gregory
    For a mental exercise I decided to try to solve the bubble breaker game found on many cell phones, as well as in an example here: Bubble Break Game.

    The random (N,M,C) board consists of N rows x M columns with C colors. The goal is to get the highest score by picking the sequence of bubble groups that ultimately leads to the highest score. A bubble group is 2 or more bubbles of the same color that are adjacent to each other in either the x or y direction; diagonals do not count. When a group is picked, the bubbles disappear, any holes are filled with bubbles from above first (i.e. shift down), then any remaining holes are filled by shifting right. A bubble group scores n * (n - 1), where n is the number of bubbles in the bubble group.

    The first algorithm is a simple exhaustive recursive algorithm which explores going through the board row by row and column by column, picking bubble groups. Once a bubble group is picked, we create a new board and try to solve that board, recursively descending down. Some of the ideas I am using include normalized memoization: once a board is solved, we store the board and the best score in a memoization table.

    I created a prototype in Python which shows a (2,15,5) board takes 8,859 boards to solve in about 3 seconds. A (3,15,5) board takes 12,384,726 boards in 50 minutes on a server. The solver rate is ~3k-4k boards/sec and gradually decreases as the memoization search takes longer. The memoization table grows to 5,692,482 boards and hits 6,713,566 times.

    What other approaches could yield high scores besides the exhaustive search? I don't see any obvious way to divide and conquer, but trending towards larger and larger bubble groups seems to be one approach. Thanks to David Locke for posting the paper link which talks about a window solver that uses a constant-depth lookahead heuristic.
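
    For concreteness, here is a hedged C# sketch of the two rules described above -- finding groups of 2+ same-colored bubbles over 4-adjacency and scoring a group of n bubbles as n * (n - 1). The board representation (a color index per cell, -1 for empty) is an assumption, not part of the original prototype:

        using System;
        using System.Collections.Generic;

        static class BubbleGroups
        {
            // Finds all groups of 2 or more same-colored, 4-adjacent bubbles.
            public static List<List<(int, int)>> Find(int[,] board)
            {
                int rows = board.GetLength(0), cols = board.GetLength(1);
                var seen = new bool[rows, cols];
                var groups = new List<List<(int, int)>>();

                for (int r = 0; r < rows; r++)
                    for (int c = 0; c < cols; c++)
                    {
                        if (seen[r, c] || board[r, c] < 0) continue;
                        var group = new List<(int, int)>();
                        var stack = new Stack<(int, int)>();
                        stack.Push((r, c));
                        seen[r, c] = true;
                        while (stack.Count > 0)
                        {
                            var (cr, cc) = stack.Pop();
                            group.Add((cr, cc));
                            // Visit the four orthogonal neighbours only; diagonals do not count.
                            foreach (var (nr, nc) in new[] { (cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1) })
                                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                                    && !seen[nr, nc] && board[nr, nc] == board[r, c])
                                {
                                    seen[nr, nc] = true;
                                    stack.Push((nr, nc));
                                }
                        }
                        if (group.Count >= 2) groups.Add(group);
                    }
                return groups;
            }

            // Score for removing a group of n bubbles, per the rule above.
            public static int Score(int n) { return n * (n - 1); }
        }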

    Read the article

  • BMP2AVI program in MATLAB

    - by ariel
    Hi. I wrote a program that used to work (I swear) and has stopped working. This code takes a series of BMPs and converts them into an AVI file:

        path4avi='C:/FadeOutMask/'; %dont forget the '/' in the end of the path
        pathOfFrames='C:/FadeOutMask/';
        NumberOfFiles=1;
        NumberOfFrames=10;
        %1:1:(NumberOfFiles)
        for i=0:1:(NumberOfFiles-1)
            FileName=strcat(path4avi,'FadeMaskAsael',int2str(i),'.avi') %the generated file
            aviobj = avifile(FileName,'compression','None');
            aviobj.fps=10;
            for j=0:1:(NumberOfFrames-1)
                Frame=strcat(pathOfFrames,'MaskFade',int2str(i*10+j),'.bmp') %not a good name for the directory
                [Fa,map]=imread(Frame);
                imshow(Fa,map);
                F=getframe();
                aviobj=addframe(aviobj,F)
            end
            aviobj=close(aviobj);
        end

    And this is the error I get:

        ??? Error using ==> checkDisplayRange at 22
        HIGH must be greater than LOW.
        Error in ==> imageDisplayValidateParams at 57
        common_args.DisplayRange = checkDisplayRange(common_args.DisplayRange,mfilename);
        Error in ==> imageDisplayParseInputs at 79
        common_args = imageDisplayValidateParams(common_args);
        Error in ==> imshow at 199
        [common_args,specific_args] = ...
        Error in ==> ConverterDosenWorkd at 19
        imshow(Fa,map);

    Thank you,
    Ariel

    Read the article

  • Required Working Precision for the BBP Algorithm?

    - by brainfsck
    Hello, I'm looking to compute the nth digit of pi in a low-memory environment. As I don't have decimals available to me, this integer-only BBP algorithm in Python has been a great starting point. I only need to calculate one digit of pi at a time.

    How can I determine the lowest I can set D, the "number of digits of working precision"? D=4 gives me many correct digits, but a few digits will be off by one. For example, computing digit 393 with a precision of 4 gives me 0xafda, from which I extract the digit 0xa; however, the correct digit is 0xb. No matter how high I set D, it seems that testing a sufficient number of digits finds one where the formula returns an incorrect value.

    I've tried upping the precision when the digit is "close" to another, e.g. 0x3fff or 0x1000, but cannot find any good definition of "close"; for instance, calculating digit 9798 gives me 0xcde6, which is not very close to 0xd000, but the correct digit is 0xd.

    Can anyone help me figure out how much working precision is needed to calculate a given digit using this algorithm? Thank you.
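
    I don't have a rigorous bound to offer, but one pragmatic heuristic is to recompute the digit with increasing working precision until the extracted digit stabilizes. A hedged C# sketch -- BbpHexDigits below is a hypothetical stand-in for your integer-only BBP routine, and "agrees at two consecutive precisions" is a heuristic acceptance test, not a proof of correctness:

        using System;

        static class PiDigit
        {
            // Hypothetical stand-in: returns the hex digits of pi starting at 'position',
            // computed with 'precision' working digits (plug in the BBP implementation here).
            static string BbpHexDigits(long position, int precision)
            {
                throw new NotImplementedException("supply your BBP routine");
            }

            // Raise the working precision until the leading digit agrees twice in a row,
            // which guards against the off-by-one cases described above.
            public static char StableHexDigit(long position, int startPrecision = 4, int maxPrecision = 32)
            {
                char previous = '\0';
                for (int d = startPrecision; d <= maxPrecision; d++)
                {
                    char digit = BbpHexDigits(position, d)[0];
                    if (digit == previous) return digit;
                    previous = digit;
                }
                throw new InvalidOperationException("digit did not stabilize; raise maxPrecision");
            }
        }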

    Read the article

  • Adding stock data to AmiBroker using C#

    - by femi
    Hello, I have had a hard time getting an answer to this and I would really, really appreciate some help; I have been on this for over 2 weeks without headway. I want to use C# to add a line of stock data to AmiBroker, but I just can't find a CLEAR explanation of how to instantiate it in C#. In VB I would do something like:

        Dim AmiBroker = CreateObject("Broker.Application")
        sSymbol = ArrayRow(0).ToUpper
        Stock = AmiBroker.Stocks.Add(sSymbol)
        iDate = ArrayRow(1).ToLower
        quote = Stock.Quotations.Add(iDate)
        quote.Open = CSng(ArrayRow(2))
        quote.High = CSng(ArrayRow(3))
        quote.Low = CSng(ArrayRow(4))
        quote.Close = CSng(ArrayRow(5))
        quote.Volume = CLng(ArrayRow(6))

    The problem is that CreateObject will not work in C# in this instance. I found the code below somewhere online, but I can't seem to understand how to achieve the above with it:

        Type objClassType;
        objClassType = Type.GetTypeFromProgID("Broker.Application");
        // Instantiate AmiBroker
        objApp = Activator.CreateInstance(objClassType);
        objStocks = objApp.GetType().InvokeMember("Stocks", BindingFlags.GetProperty, null, objApp, null);

    Can anyone help me here? Thanks
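
    If you can target C# 4.0 / .NET 4, the dynamic keyword gives you late binding that reads almost like the VB version. A hedged sketch -- the Broker.Application object model below (Stocks.Add, Quotations.Add, Open/High/Low/Close/Volume) is taken from the VB snippet above rather than verified against AmiBroker's documentation:

        using System;
        using System.Globalization;

        static class AmiBrokerImport
        {
            // Adds one quote row (symbol, date, O/H/L/C/V) to AmiBroker via late-bound COM.
            public static void AddQuote(string[] arrayRow)
            {
                Type brokerType = Type.GetTypeFromProgID("Broker.Application");
                dynamic ami = Activator.CreateInstance(brokerType);   // the C# analogue of CreateObject

                dynamic stock = ami.Stocks.Add(arrayRow[0].ToUpper());
                dynamic quote = stock.Quotations.Add(arrayRow[1].ToLower());

                quote.Open   = float.Parse(arrayRow[2], CultureInfo.InvariantCulture);
                quote.High   = float.Parse(arrayRow[3], CultureInfo.InvariantCulture);
                quote.Low    = float.Parse(arrayRow[4], CultureInfo.InvariantCulture);
                quote.Close  = float.Parse(arrayRow[5], CultureInfo.InvariantCulture);
                quote.Volume = long.Parse(arrayRow[6], CultureInfo.InvariantCulture);
            }
        }

    On .NET 3.5 and earlier you are stuck with the reflection route shown above (Type.GetTypeFromProgID plus InvokeMember for every property access).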

    Read the article

  • Error starting Windows Media Encoder

    - by George2
    Hello everyone, I am using the following code snippet on Windows Server 2003 x64 edition with Windows Media Encoder 9. I get the following error when invoking encoder.Start():

        System.Runtime.InteropServices.COMException 0xC00D1B67

    My code snippet is below; does anyone have any idea what is wrong?

        IWMEncSourceGroup SrcGrp;
        IWMEncSourceGroupCollection SrcGrpColl;
        SrcGrpColl = encoder.SourceGroupCollection;
        SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");
        IWMEncVideoSource2 SrcVid;
        IWMEncSource SrcAud;
        SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
        SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
        SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
        SrcAud.SetInput("Device://Default_Audio_Device", "", "");

        // Specify a file object in which to save encoded content.
        IWMEncFile File = encoder.File;
        string CurrentFileName = Guid.NewGuid().ToString();
        File.LocalFileName = CurrentFileName;
        CurrentFileName = File.LocalFileName;

        // Choose a profile from the collection.
        IWMEncProfileCollection ProColl = encoder.ProfileCollection;
        IWMEncProfile Pro;
        for (int i = 0; i < ProColl.Count; i++)
        {
            Pro = ProColl.Item(i);
            if (Pro.Name == "Screen Video/Audio High (CBR)")
            {
                SrcGrp.set_Profile(Pro);
                break;
            }
        }

        encoder.Start();

    Thanks in advance, George

    Read the article

  • Winforms: How to speed up Invalidate()?

    - by Pedery
    I'm developing a retained-mode drawing application in GDI+. The application can draw simple shapes to a canvas and perform basic editing. The math that does this is optimized to the last byte and is not an issue. I'm drawing on a panel that is using the built-in ControlStyles.DoubleBuffer.

    Now, my problem arises if I run my app maximized on a big monitor (HD in my case). If I try to draw a line from one corner of the (big) canvas to the diagonally opposite one, it starts to lag and CPU usage goes way up. Each graphical object in my app has a bounding box. Thus, when I invalidate the bounding box of a line that goes from one corner of the maximized app to the opposite diagonal one, that bounding box is virtually as big as the canvas. When a user is drawing a line, this invalidation of the bounding box happens on the MouseMove event, and there is a clearly visible lag. The lag also exists if the line is the only object on the canvas.

    I've tried to optimize this in many ways. If I draw a shorter line, the CPU usage and the lag go down. If I remove the Invalidate() and keep all other code, the app is quick. If I use a Region (that only spans the figure) to invalidate instead of the bounding box, it is just as slow. If I split the bounding box into a range of smaller boxes that lie back to back, thus reducing the invalidation area, no visible performance gain can be seen. Thus I'm at a loss here. How can I speed up the invalidation?

    On a side note, both Paint.NET and MSPaint suffer from the same shortcomings. Word and PowerPoint, however, seem to be able to paint a line as described above with no lag and no CPU load at all. So it's possible to achieve the desired results; the question is how.
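
    One thing worth double-checking (a hedged sketch rather than a guaranteed fix, and the Shape class here is purely illustrative): make sure the paint handler only redraws objects that intersect the invalidated area, and invalidate only the union of the old and new bounds when something moves -- otherwise every Invalidate() still repaints the whole scene:

        using System.Collections.Generic;
        using System.Drawing;
        using System.Windows.Forms;

        public class Shape
        {
            public Rectangle Bounds { get; set; }
            public virtual void Draw(Graphics g) { g.DrawRectangle(Pens.Black, Bounds); }
        }

        public class CanvasPanel : Panel
        {
            private readonly List<Shape> shapes = new List<Shape>();

            public CanvasPanel()
            {
                // Reduce flicker and let Windows clip drawing to the invalid region.
                SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true);
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                foreach (Shape s in shapes)
                    if (s.Bounds.IntersectsWith(e.ClipRectangle))   // skip anything outside the dirty rectangle
                        s.Draw(e.Graphics);
            }

            // Invalidate only what actually changed.
            public void MoveShape(Shape s, Rectangle newBounds)
            {
                Rectangle dirty = Rectangle.Union(s.Bounds, newBounds);
                s.Bounds = newBounds;
                Invalidate(dirty);
            }
        }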

    Read the article

  • What are alternatives to Win32 PulseEvent() function?

    - by Bill
    The documentation for the Win32 API PulseEvent() function (kernel32.dll) states that this function is “… unreliable and should not be used by new applications. Instead, use condition variables”. However, condition variables cannot be used across process boundaries like (named) events can.

    I have a scenario that is cross-process and cross-runtime (native and managed code) in which a single producer occasionally has something interesting to make known to zero or more consumers. Right now, a well-known named event is used (and set to the signaled state) by the producer using this PulseEvent function when it needs to make something known. Zero or more consumers wait on that event (WaitForSingleObject()) and perform an action in response. There is no need for two-way communication in my scenario, and the producer does not need to know if the event has any listeners, nor does it need to know if the event was successfully acted upon. On the other hand, I do not want any consumers to ever miss any events. In other words, the system needs to be perfectly reliable -- but the producer does not need to know if that is the case or not.

    The scenario can be thought of as a "clock ticker" -- i.e., the producer provides a semi-regular signal for zero or more consumers to count, and all consumers must have the correct count over any given period of time. No polling by consumers is allowed (performance reasons). The tick interval is just a few milliseconds (20 or so, but not perfectly regular).

    Raymond Chen (The Old New Thing) has a blog post pointing out the "fundamentally flawed" nature of the PulseEvent() function, but I do not see an alternative for my scenario from Chen or the posted comments. Can anyone please suggest one? Please keep in mind that the IPC signal must cross process boundaries on the machine, not simply threads, and the solution needs to have high performance in that consumers must be able to act within 10 ms of each event.

    Read the article

  • Missing chars in JpGraph

    - by Álvaro G. Vicario
    I have a web site that runs on Windows and uses cp1252 (aka Win-1252) so it can display Spanish characters. The app generates some plots with JpGraph 2.3. These plots use the Tahoma OpenType font family to display text labels. Strings are provided in ANSI (i.e., cp1252) and the font files support cp1252 (the *.ttf files were actually copied from the system's font folder). It's been working fine in several setups from PHP/5.2.6 to PHP/5.3.0.

    Problems started when I ran the app under PHP/5.3.1: all non-ASCII characters are replaced by the hollow rectangle that represents missing or unknown chars. JpGraph's documentation is not very precise about how it expects international characters. Apparently, text is handled internally by the imagettftext() function, which expects UTF-8. However, encoding everything as UTF-8 breaks the app in all the systems: where ANSI used to work fine, I get wrong characters (Ê instead of Ú), and where I got missing chars, now I get a PHP error:

        Warning: imagettftext(): any2eucjp(): something happen

    Do you have any clue about what changed in GD2 from PHP/5.3.0 to 5.3.1 that could be affecting the rendering of non-ASCII chars? How am I expected to feed JpGraph with strings in the Win-1252 charset?

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows.

    I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.

    So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article

  • How should bug tracking and help tickets integrate?

    - by Max Schmeling
    I have a little experience with bug tracking systems such as FogBugz, where help tickets are (or can be) bugs, and I have some experience using a bug tracking system internally, completely separate from a help center system. My question is: in a company with an existing (home-grown) help center system where replacing it is not an option, how should a bug tracking system (probably Mantis) be integrated into the process?

    Right now help tickets get put in for issues, questions, etc. and they get assigned to the appropriate person (PC tech, help desk staff, or, if it's an application issue they can't solve at the help desk, a developer). A user can put a request for small modifications or fixes to an application in a help ticket, and the developer it gets assigned to will make the change at some point, log their time against that ticket, and then close the ticket when it goes to production.

    We don't currently have a bug tracking system, so I'm looking into the best way to integrate one. Should we just take the help tickets and put them into the bug tracking system if they are bugs (or issues or feature requests) and then close the ticket if it's not an emergency fix? We probably don't want to expose the bug tracking system to anyone else, as they wouldn't know what to put in the help center system and what to put in the bug tracker... right? Any thoughts? Suggestions? Tips? Advice? To-dos? Not-to-dos?

    Read the article

  • How do I detect whether a similar document is already stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database. The problem is that duplicates are not exact matches but rather similar documents. For this purpose I decided to use FuzzyQuery as follows:

        var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
            new Term("text", queryText), 0.8f, 0);
        hits = _searcher.Search(query);

    The idea was to set the minimal similarity to 0.8 (which I think is high enough) so only similar documents would be found, excluding those that are not sufficiently similar. To test this code I decided to see if it finds an already existing document: the variable queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it doesn't detect even an exact match.

    The index was built by this code:

        doc.Add(new global::Lucene.Net.Documents.Field(
            "text", text,
            global::Lucene.Net.Documents.Field.Store.YES,
            global::Lucene.Net.Documents.Field.Index.TOKENIZED,
            global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations from below, and the results are: a TermQuery doesn't return any result, while a query constructed with

        var _analyzer = new RussianAnalyzer();
        var parser = new global::Lucene.Net.QueryParsers.QueryParser("text", _analyzer);
        var query = parser.Parse(queryText);
        var _searcher = new IndexSearcher(Settings.General.Default.LuceneIndexDirectoryPath);
        var hits = _searcher.Search(query);

    returns several results, with the maximum score on the document that has the exact match and several other documents that have similar content.
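
    One detail that may explain the FuzzyQuery result (hedged: class names vary between Lucene.Net versions, and this sketch follows the 2.x-style API used above): FuzzyQuery compares a single term, so passing a whole document as queryText against a TOKENIZED field will rarely match anything. Building one FuzzyQuery per token and OR-ing them in a BooleanQuery is closer to "find similar documents":

        using System;
        using Lucene.Net.Index;
        using Lucene.Net.Search;

        static class SimilarDocQuery
        {
            // Crude stand-in for running the query text through the same analyzer
            // (ideally tokenize with the RussianAnalyzer used at index time).
            static string[] Tokenize(string text)
            {
                return text.ToLowerInvariant().Split(
                    new[] { ' ', '\t', '\r', '\n', '.', ',', ';', ':', '!', '?' },
                    StringSplitOptions.RemoveEmptyEntries);
            }

            public static Query Build(string queryText, float minSimilarity)
            {
                var query = new BooleanQuery();
                foreach (string token in Tokenize(queryText))
                    query.Add(new FuzzyQuery(new Term("text", token), minSimilarity, 0),
                              BooleanClause.Occur.SHOULD);   // plain Occur.SHOULD in later versions
                return query;
            }
        }

    The resulting hits can then be filtered by score to decide what counts as a duplicate.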

    Read the article

  • Performance problems writing to a log4net FileAppender from multiple threads

    - by Wayne
    TickZoom is a very high performance app which uses its own parallelization library and multiple O/S threads for smooth utilization of multi-core computers. The app hits a bottleneck where users need to write information to a log appender from separate O/S threads.

    The FileAppender uses the MinimalLock feature so that each thread can lock and write to the file and then release it for the next thread to write. If MinimalLock gets disabled, log4net reports errors about the file already being locked by another process (thread).

    A better way for log4net to do this would be to have a single thread that takes care of writing to the FileAppender while any other threads simply add their messages to a queue. In that way, MinimalLock could be disabled to greatly improve logging performance. Additionally, the application does a lot of CPU-intensive work, so it would also improve performance to use a separate thread for writing to the file so the CPU never waits on the I/O to complete.

    So the question is: does log4net already offer this feature? If so, how do you enable threaded writing to a file? Is there another, more advanced appender, perhaps? If not, then since log4net is already wrapped in the platform, that makes it possible to implement a separate thread and queue for this purpose in the TickZoom code. Sincerely, Wayne
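
    As far as I know, log4net 1.2.x does not ship an asynchronous file appender out of the box, so one hedged workaround is exactly the approach described above: funnel all messages through a single writer thread of your own and leave the FileAppender locking alone. A minimal sketch -- the class below is application code, not a log4net API, and it assumes .NET 4's BlockingCollection (on older frameworks a Queue plus a Monitor does the same job):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using log4net;

        // Funnels log messages from any thread to one background writer thread,
        // so only a single thread ever touches the FileAppender.
        public sealed class AsyncLogger : IDisposable
        {
            private static readonly ILog Log = LogManager.GetLogger(typeof(AsyncLogger));
            private readonly BlockingCollection<string> queue = new BlockingCollection<string>();
            private readonly Thread writer;

            public AsyncLogger()
            {
                writer = new Thread(Drain) { IsBackground = true, Name = "log-writer" };
                writer.Start();
            }

            // Called from producer threads; cheap, never blocks on file I/O.
            public void Info(string message) { queue.Add(message); }

            private void Drain()
            {
                foreach (string message in queue.GetConsumingEnumerable())
                    Log.Info(message);   // only this thread ever writes to the appender
            }

            public void Dispose()
            {
                queue.CompleteAdding();   // flush whatever is still queued
                writer.Join();
            }
        }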

    Read the article

  • Passing string with (accidental) escape character loses character even though it's a raw string

    - by Steen
    I have a function with a Python doctest that fails because one of the test input strings has a backslash that's treated like an escape character, even though I've written the string as a raw string.

    My doctest looks like this:

        >>> infile = [ "Todo: fix me", "/** todo: fix", "* me", "*/", r"""//\todo stuff to fix""", "TODO fix me too", "toDo bug 4663" ]
        >>> find_todos( infile )
        ['fix me', 'fix', 'stuff to fix', 'fix me too', 'bug 4663']

    And the function, which is intended to extract the todo texts from a single line following some variation on a todo specification, looks like this:

        todos = list()
        for line in infile:
            print line
            if todo_match_obj.search( line ):
                todos.append( todo_match_obj.search( line ).group( 'todo' ) )

    The regular expression called todo_match_obj is:

        r"""(?:/{0,2}\**\s?todo):?\s*(?P<todo>.+)"""

    A quick conversation with my IPython shell gives me:

        In [35]: print "//\todo"
        // odo

        In [36]: print r"""//\todo"""
        //\todo

    And, just in case the doctest implementation uses stdout (I haven't checked, sorry):

        In [37]: sys.stdout.write( r"""//\todo""" )
        //\todo

    My regex-foo is not high by any standards, and I realize that I could be missing something here.

    EDIT: Following Alex Martelli's answer, I would like suggestions on what regular expression would actually match the blasted r"""//\todo fix me""". I know that I did not originally ask for someone to do my homework, and I will accept Alex's answer as it really did answer my question (or confirm my fears). But I promise to upvote any good solutions to my problem here :)

    I'm using Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15). Thank you for reading this far (if you skipped directly down here, I understand).

    Read the article

  • What are the primitive Forth operators?

    - by Barry Brown
    I'm interested in implementing a Forth system, just so I can get some experience building a simple VM and runtime. When starting in Forth, one typically learns about the stack and its operators (DROP, DUP, SWAP, etc.) first, so it's natural to think of these as being among the primitive operators. But they're not: each of them can be broken down into operators that directly manipulate memory and the stack pointers. Later one learns about store (!) and fetch (@), which can be used to implement DUP, SWAP, and so forth (ha!).

    So what are the primitive operators? Which ones must be implemented directly in the runtime environment so that all the others can be built from them? I'm not interested in high performance; I want something that I (and others) can learn from. Operator optimization can come later. (Yes, I'm aware that I can start with a Turing machine and go from there. That's a bit extreme.)

    Edit: What I'm aiming for is akin to bootstrapping an operating system or a new compiler. What do I need to implement, at minimum, so that I can construct the rest of the system out of those primitive building blocks? I won't implement this on bare hardware; as an educational exercise, I'd write my own minimal VM.
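
    To make the "stack operators reduce to memory and pointer operations" point concrete, here is a hedged C# sketch of a toy data stack in which DUP and SWAP are built purely from fetch, store and stack-pointer arithmetic (the memory layout and word set are illustrative, not any particular Forth):

        static class ToyForth
        {
            static readonly int[] Mem = new int[1024];   // unified memory
            static int Sp = 1024;                        // data stack grows downward from the top

            // The "real" primitives: raw memory access and stack-pointer moves.
            static int  Fetch(int addr)            { return Mem[addr]; }    // @
            static void Store(int addr, int value) { Mem[addr] = value; }   // !
            static void Push(int value)            { Sp--; Store(Sp, value); }
            static int  Pop()                      { int v = Fetch(Sp); Sp++; return v; }

            // DUP and SWAP fall straight out of the primitives above.
            public static void Dup() { Push(Fetch(Sp)); }
            public static void Swap()
            {
                int a = Fetch(Sp);       // top of stack
                int b = Fetch(Sp + 1);   // second item
                Store(Sp, b);
                Store(Sp + 1, a);
            }
        }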

    Read the article

  • How Random is System.Guid.NewGuid()? (Take two)

    - by Vilx-
    Before you start marking this as a duplicate, hear me out. The other question has a (most likely) incorrect accepted answer.

    I do not know how .NET generates its GUIDs -- probably only Microsoft does -- but there's a high chance it simply calls CoCreateGuid(). That function, however, is documented to be calling UuidCreate(), and the algorithms for creating a UUID are pretty well documented. Long story short, be that as it may, it seems that System.Guid.NewGuid() indeed uses the version 4 UUID generation algorithm, because all the GUIDs it generates match the criteria (see for yourself; I tried a couple million GUIDs and they all matched). In other words, these GUIDs are almost random, except for a few known bits.

    This then again raises the question: how random IS this random? As every good little programmer knows, a pseudo-random number algorithm is only as random as its seed (aka entropy). So what is the seed for UuidCreate()? How often is the PRNG re-seeded? Is it cryptographically strong, or can I expect the same GUIDs to start pouring out if two computers accidentally call System.Guid.NewGuid() at the same time? And can the state of the PRNG be guessed if sufficiently many sequentially generated GUIDs are gathered?
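
    For reference, the fixed bits mentioned above are easy to check from code. A small sketch -- it only confirms the version-4 layout and says nothing about the quality of the underlying entropy:

        using System;

        static class GuidVersionCheck
        {
            static void Main()
            {
                for (int i = 0; i < 5; i++)
                {
                    string hex = Guid.NewGuid().ToString("N");   // 32 hex chars, no dashes
                    char version = hex[12];                      // '4' for a version-4 UUID
                    char variant = hex[16];                      // '8', '9', 'a' or 'b' for the RFC 4122 variant
                    Console.WriteLine("{0}  version={1} variant={2}", hex, version, variant);
                }
            }
        }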

    Read the article

  • ExternalInterface calls from JavaScript to control an AS3 timeline

    - by giles
    I have function problem using this code (below), the embedded flash movieclip disappears or completely prevents the scrollto.js query to function in DW cs3. Communication between Flash and JavaScript is without problems, it is the call back I can't find to work and more frustratingly, should be simple, as no values are not required. So far, this has been hours of scouring the net without a workable end in sight...ahrr. What is a function for this to work? JavaScript – to call Flash event from HTML button link, placed between head tags function callExternalInterface() var flashMovie = window.document.menu; flashMovie.menu_up(value); menu_up is the string. Does anyone know of workable function for callback?? HTML <div id="btn_up"><a href="#top" name="charDev" id="charDev" onclick="">top</a></div> Pane navigation div that uses Scrollto.js query, and it's this link I need calling back to the embedded "menubtns.swf" (nested in "AS3Menu_javascript.swf") to play 5 frames of this movieclip, via a JS function. Embedded .swf code, using swfobject.js with allowScriptAccess=always <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="menu"<br/> width="251" height="251" id="menu"> <param name="movie" value="../~Assets/Flash/AS3Menu_javascript.swf" /> <param name="allowScriptAccess" value="always" /> <param name="movie" value="ExternalInterfaceScript.swf" /> <param name="quality" value="high" /> <object type="application/x-shockwave-flash" data="../~Assets/Flash/AS3Menu_javascript.swf" width="250" height="250"> <p>Alternative content</p> </object> </object> AS3 / Flash import flash.external.ExternalInterface;flash.system.Security.allowDomain(/sourceDomain/); ExternalInterface.addCallBack("menu_up", this, resetmenu); function resetmenu(){ gotoAndPlay:("frame label" / "number") }

    Read the article

  • AI: Determining which tests to run to get the most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com

    I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs & whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100 ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID)

    I need an algorithm to determine which features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs).

    There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved.

    I imagine that this is a fairly standard problem in AI -- having a cheap heuristic for which expensive queries to make -- but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
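
    One standard family of heuristics for exactly this exploit/explore balance is the multi-armed-bandit upper-confidence-bound score. A hedged C# sketch -- the field names are illustrative, and DiscriminationGain is a stand-in for whatever per-site information measure the cheap SQL queries can produce:

        using System;

        // Per-site statistics gathered so far (illustrative shape, not the real schema).
        public sealed class SiteStats
        {
            public long TimesQueried;          // how often this site has been tested
            public double DiscriminationGain;  // accumulated estimate of how well it splits candidate users
        }

        public static class QueryPlanner
        {
            // UCB1-style score: an exploitation term plus an exploration bonus that grows
            // for rarely-tested sites. Higher score = query this site next.
            public static double Score(SiteStats s, long totalQueries)
            {
                if (s.TimesQueried == 0) return double.MaxValue;   // always try an untested site once
                double exploit = s.DiscriminationGain / s.TimesQueried;
                double explore = Math.Sqrt(2.0 * Math.Log(totalQueries) / s.TimesQueried);
                return exploit + explore;
            }
        }

    Ranking candidate sites by this score each round and querying the top 0.2% is one cheap way to fold both prior performance and uncertainty into the decision.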

    Read the article

  • What programming languages do the top tier Universities teach?

    - by Simucal
    I'm constantly being inundated with articles and people talking about how most of today's universities are nothing more than Java vocational schools, churning out mediocre programmer after mediocre programmer. Our very own Joel Spolsky has his famous article, "The Perils of Java Schools." Similarly, Alan Kay, a famous computer scientist (and SO member), has said this in the past:

    "I fear — as far as I can tell — that most undergraduate degrees in computer science these days are basically Java vocational training." - Alan Kay (link)

    If the languages being taught by the schools are considered such a contributing factor to the quality of a school's program, then I'm curious what languages the "top-tier" computer science schools teach (MIT, Carnegie Mellon, Stanford, etc.). If the average school is performing so poorly due in large part to the languages (or lack of them) that it teaches, then what languages do the supposedly "good" CS programs teach that differentiate them?

    If you can, provide the name of the school you attended, followed by a list of the languages they use throughout their coursework.

    Edit: Shog-9 asks why I don't get this information directly from the schools' websites themselves. I would, but many schools' websites don't discuss the languages they use in their course descriptions. Quite a few will say, "using high-level languages we will...", without elaborating on which languages they use. So we should be able to get a pretty accurate list of languages taught at various well-known institutions from the various SO members who have attended them.

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when this happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider.

    The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency), as it will be consumed in different environments -- nor do we want our websites communicating directly with databases.

    I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool.

    Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
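
    A hedged sketch of the shape being described -- the domain layer sees only an interface, and the System.Web.Security dependency lives in an infrastructure implementation (type and member names here are illustrative, not an established API):

        using System.Web.Security;   // referenced only by the infrastructure class below

        // Domain layer: depends on nothing but this interface.
        public interface IMembershipRepository
        {
            bool ValidateUser(string userName, string password);
            void CreateUser(string userName, string password, string email);
        }

        // Infrastructure layer: adapts the ASP.NET membership provider to the domain interface.
        public sealed class AspNetMembershipRepository : IMembershipRepository
        {
            public bool ValidateUser(string userName, string password)
            {
                return Membership.ValidateUser(userName, password);
            }

            public void CreateUser(string userName, string password, string email)
            {
                Membership.CreateUser(userName, password, email);
            }
        }

    The websites (or any other host) take an IMembershipRepository, so swapping the membership store later means writing a new implementation rather than touching business logic.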

    Read the article

  • Sending Email through Gmail

    - by persistence
    I am writing a program that sends an email through Gmail, but I get a serious operation-timeout error. What is the likely cause?

        class Mailer
        {
            MailMessage ms;
            SmtpClient Sc;

            public Mailer()
            {
                Sc = new SmtpClient("smtp.gmail.com");
                //Sc.Credentials = CredentialCache.DefaultNetworkCredentials;
                Sc.EnableSsl = true;
                Sc.Port = 465;
                Sc.Timeout = 900000000;
                Sc.DeliveryMethod = SmtpDeliveryMethod.Network;
                Sc.UseDefaultCredentials = false;
                Sc.Credentials = new NetworkCredential("uid", "mypss");
            }

            public void MailTodaysBirthdays(List<Celebrant> TodaysCelebrant)
            {
                int i = TodaysCelebrant.Count();
                foreach (Celebrant cs in TodaysCelebrant)
                {
                    //if (IsEmail(cs.EmailAddress.ToString().Trim()))
                    //{
                    ms = new MailMessage();
                    ms.To.Add(cs.EmailAddress);
                    ms.From = new MailAddress("uid", "Developers", System.Text.Encoding.UTF8);
                    ms.Subject = "Happy Birthday ";
                    String EmailBody = "Happy Birthday " + cs.FirstName;
                    ms.Body = EmailBody;
                    ms.Priority = MailPriority.High;
                    try
                    {
                        Sc.Send(ms);
                    }
                    catch (Exception ex)
                    {
                        Sc.Send(ms);
                        BirthdayServices.LogEvent(ex.Message.ToString(), EventLogEntryType.Error);
                    }
                    //}
                }
            }
        }
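
    For what it's worth, System.Net.Mail.SmtpClient does not speak the implicit-SSL protocol used on port 465, which commonly shows up as exactly this kind of timeout; Gmail's STARTTLS endpoint on port 587 with EnableSsl is the configuration that usually works. A hedged sketch (credentials and addresses are placeholders):

        using System.Net;
        using System.Net.Mail;

        static class GmailSmtpExample
        {
            public static void Send()
            {
                var client = new SmtpClient("smtp.gmail.com", 587)   // 587 = STARTTLS; SmtpClient cannot do implicit SSL on 465
                {
                    EnableSsl = true,
                    DeliveryMethod = SmtpDeliveryMethod.Network,
                    UseDefaultCredentials = false,
                    Credentials = new NetworkCredential("user@gmail.com", "password")
                };

                using (var message = new MailMessage("user@gmail.com", "recipient@example.com",
                                                     "Happy Birthday", "Happy Birthday!"))
                {
                    client.Send(message);
                }
            }
        }

    Separately, note that the catch block above calls Sc.Send(ms) again before logging; if the first send failed, the retry will usually throw as well and the exception never gets logged.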

    Read the article

  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but it could be a disaster with others, e.g. JavaScript files.

    So really, my query is: what was the motivation and concept behind providing this functionality in the first instance? Does anyone actually find these comments useful?

    Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it in PVCS, VSS and Subversion (Subversion keyword substitution); however, I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.

    Read the article

  • How to use the WorkbookBeforeClose event correctly?

    - by Ahmad
    On a daily basis, a person needs to check that specific workbooks have been correctly updated with Bloomberg and Reuters market data, i.e. all data has pulled through and the 'numbers look correct'. In the past, people were not checking the 'numbers', which led to inaccurate uploads to other systems, etc. The idea is that 'something' needs to be developed to prevent the user from closing/saving the workbook unless he/she has checked that the updates are correct/accurate. The "numbers look correct" check is purely an intuitive exercise and thus will not be coded in any way.

    The simple solution was to prompt users prior to closing the specific workbook to verify that the data has been checked. Using VSTO SE for Excel 2007, an add-in was created which hooks into the WorkbookBeforeClose event, which is initialised in the add-in's ThisAddIn_Startup:

        private void wb_BeforeClose(Xl.Workbook wb, ref bool cancel)
        {
            //.... snip ...
            if (list.Contains(wb.Name))
            {
                DialogResult result = MessageBox.Show("some message", "sometitle", MessageBoxButtons.YesNo);
                if (result != DialogResult.Yes)
                {
                    cancel = true; // i think this prevents the whole application from closing
                }
            }
        }

    I have found the following: ThisApplication.WorkbookBeforeSave vs ThisWorkbook.Application.WorkbookBeforeSave, which recommends that one should use the ThisApplication.WorkbookBeforeClose event. I think that is what I am doing, since it will span all open files.

    The issue I have with this approach is that, assuming I have several files open, some of which are in my list, the event prevents Excel from closing all the files sequentially. It now requires each file to be closed individually.

    Am I using the event correctly, and is this effective and efficient use of the event? Should I use the application-level event or the document-level event? Is there a way to prevent the above behaviour? Any other suggestions are welcome.

    VS 2005 with VSTO SE

    Read the article
