Search Results

Search found 2098 results on 84 pages for 'paths'.


  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:

        public void AddItemSecurity(int itemId, int[] userIds)
        public int[] GetValidItemIds(int userId)

    Initially I'm thinking storage of the form:

        itemId -> userId, userId, userId
        userId -> itemId, itemId, itemId

    AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2,000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits, where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidItemIds needs to be sub-second. Also, if there is an update on an existing itemId, I need to remove that itemId for users no longer in the list.

    I'm trying to think about how I should store this in an optimal fashion: preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups, as well as an easy way to update the list per user. By persisting this as memory-mapped files with the .NET 4 framework, I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The itemId -> userId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly.

    Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)

    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:

    - Table with two columns (userid, itemid), both Int
    - Clustered index on the two columns
    - Added ~800,000 items for 180 users - a total of 144 million rows
    - Allocated 4 GB RAM for SQL Server
    - Dual-core 2.66 GHz laptop
    - SSD disk
    - Use a SqlDataReader to read all itemids into a List
    - Loop over all users

    If I run one thread, it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results degrade: adding a third thread brings a lot of the queries up to 2 seconds, a fourth thread up to 4 seconds, and a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even on one thread. My test app takes some of the time due to the tight loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well, at least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But that makes it harder to remove items.
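
    For readers who want to see the bitmap idea concretely: below is a minimal sketch of a per-user bit array backed by a memory-mapped file. It is in Java rather than the poster's .NET (the mechanics are the same; the OS page cache supplies the caching the poster hopes to get from memory-mapped files), and the class name, constant, and file layout are hypothetical. The itemIndex here would be the item id after stripping the year and mapping to a dense per-year index, as the post suggests.

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        // One bit per item, one mapped file per user: 10M items / 8 = 1.25 MB per user.
        class UserItemBitmap implements AutoCloseable {
            private static final int MAX_ITEMS = 10_000_000;   // assumption taken from the post
            private final RandomAccessFile file;
            private final MappedByteBuffer bits;

            UserItemBitmap(String path) throws Exception {
                file = new RandomAccessFile(path, "rw");
                bits = file.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, MAX_ITEMS / 8);
            }

            // Flip a single item's bit; writes go straight to the mapped page.
            void set(int itemIndex, boolean valid) {
                int b = bits.get(itemIndex / 8);
                int mask = 1 << (itemIndex % 8);
                bits.put(itemIndex / 8, (byte) (valid ? b | mask : b & ~mask));
            }

            // Sub-second lookups: a read is just a page-cached byte fetch.
            boolean get(int itemIndex) {
                return (bits.get(itemIndex / 8) & (1 << (itemIndex % 8))) != 0;
            }

            public void close() throws Exception { file.close(); }
        }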


  • certain Smarty tags don't work in OpenX templates

    - by mikez302
    I am on a team that is developing an OpenX plugin, and I am responsible for the UI. I noticed that if I use certain Smarty tags in my template, the app doesn't work and I see an error message similar to this:

        Plugin by name 'Html_select_date' was not found in the registry; used paths:
        default_views_helpers_: /openx/www/admin/plugins/myApp/application/modules/default/views/helpers/
        OX_OXP_UI_View_Helper_: /openx/www/admin/plugins/myApp/application/../library/OX/OXP/UI/View/Helper/
        OX_UI_View_Helper_: /openx/www/admin/plugins/myApp/application/../library/OX/UI/View/Helper/
        Zend_View_Helper_: Zend/View/Helper/
        (stack trace)

    The stack trace looks like this:

        #0 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(1117): Zend_Loader_PluginLoader->load('Html_select_dat...')
        #1 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(568): Zend_View_Abstract->_getPlugin('helper', 'html_select_dat...')
        #2 /openx/www/admin/plugins/myApp/library/OX/UI/Smarty/SmartyWithViewHelper.php(25): Zend_View_Abstract->getHelper('html_select_dat...')
        #3 /openx/var/templates_compiled/%2Fdefault%2Fviews%2Fscripts%2Findex%2Fview-reports.html^%%E8^E80^E80B56F2%%view-reports.html.php(38): OX_UI_Smarty_SmartyWithViewHelper->callViewHelper('html_select_dat...', Array)
        #4 /openx/lib/smarty/Smarty.class.php(1274): include('/openx...')
        #5 /openx/www/admin/plugins/myApp/library/OX/UI/View/SmartyView.php(103): Smarty->fetch('/openx...')
        #6 /openx/www/admin/plugins/myApp/library/Zend/View/Abstract.php(832): OX_UI_View_SmartyView->_run('/openx...')
        #7 /openx/www/admin/plugins/myApp/library/OX/UI/View/SmartyView.php(151): Zend_View_Abstract->render('index/view-repo...')
        #8 /openx/www/admin/plugins/myApp/library/OX/UI/View/Helper/WithViewScript.php(23): OX_UI_View_SmartyView->render('index/view-repo...')
        #9 /openx/www/admin/plugins/myApp/application/modules/default/views/helpers/ViewReports.php(5): OX_UI_View_Helper_WithViewScript::renderViewScript('index/view-repo...', Array)
        #10 /openx/www/admin/plugins/myApp/application/modules/default/controllers/IndexController.php(98): Default_Views_Helpers_ViewReports->renderPage()
        #11 /openx/www/admin/plugins/myApp/library/Zend/Controller/Action.php(512): IndexController->viewReportsAction()
        #12 /openx/www/admin/plugins/myApp/library/Zend/Controller/Dispatcher/Standard.php(288): Zend_Controller_Action->dispatch('viewReportsActi...')
        #13 /openx/www/admin/plugins/myApp/library/Zend/Controller/Front.php(945): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http))
        #14 /openx/www/admin/plugins/myApp/application/bootstrap.php(117): Zend_Controller_Front->dispatch()
        #15 /openx/www/admin/plugins/myApp/public/index.php(7): require('/openx...')
        #16 {main}

    This does not happen with all Smarty tags. For example, I can use {if}, {foreach}, or {assign} tags without any problems. But whenever I try to use {html_select_date}, {html_image}, or {html_table}, I get the errors.

    In case this matters, the programmer who is designing the plugin copied the openXWorkflow plugin and made some changes. I noticed that the openXWorkflow plugin has a file (openx/plugins_repo/openXWorkflow/www/admin/plugins/openXWorkflow/library/OX/UI/Smarty/SmartyCompilerWithViewHelper.php) with a class that overrides the default Smarty compiler, supposedly with the ability to compile shorthands for calling ZF view helpers. That file has a list of Smarty functions, but the list is incomplete.

    If I add the functions to the list, or simply delete the file, my template works fine, but I don't like to change library files. It may make the app hard to maintain, and I don't know if it will mess up something else. The file has the comment "There is no easy access to the list of Smarty's built-in functions so we need to list them here. HTML-specific functions are not included as we cover HTML generation separately.", so it seems like certain Smarty functions may be disabled on purpose for some reason. Will anything bad happen if I try to use them? If, for example, I want to use the {html_select_date} tag in my template, how would I go about doing that?

    Keep in mind that much of this stuff is new and unfamiliar to me. This is my first time ever using OpenX or Smarty, and I only have a little bit of experience with the Zend Framework. Please let me know if we are using the wrong approach.


  • A methodology that allows for a single Java code base covering many different versions?

    - by Thorbjørn Ravn Andersen
    I work in a small shop where we have a LOT of legacy COBOL code and where a methodology has been adopted to allow us to minimize forking and branching as much as possible. For a given release we have three levels:

    - CORE - bottom layer; this code is common to all releases.
    - GROUP - optional code common to several customers.
    - CUSTOMER - optional code specific to a single customer.

    When a program is needed, it is first searched for in CUSTOMER, then in GROUP, and finally in CORE. A given application for us invokes many programs, which are all looked up in this sequence (think exe files and PATH under Windows). We also have Java programs interacting with this legacy code, and as the core-group-customer lookup mechanism does not lend itself easily to Java, the Java side has tended to grow a CVS branch for each customer, requiring much too much maintenance. The Java part and the backend part tend to be developed in parallel.

    I have been assigned to figure out a way to make the two worlds meet. Essentially we want a Java environment which allows us to have a single code base with sources for each release, where we can easily select a group and a customer and work with the application as it stands for that customer, and then easily switch to another code set and THAT customer. I was thinking of perhaps a scenario with an Eclipse project for each core, customer, and group, and then using Project Sets to select those we need for a given scenario. The problem I cannot get my head around is how we would create robust code in the CORE projects which will work regardless of which group and customer is selected. A factory class which knows which subclass of a passed Class object to invoke instead of each and every "new"? Others must have had similar code base management problems. Anybody with experiences to share?

    EDIT: The conclusion to the problem above has been that CVS needs to be replaced with a source code management system better suited to dealing with many branches concurrently and to migrating source from one component to another while keeping history. Inspired by the recent migration by slf4j and logback, we are currently looking at git, as it handles branches very well. We've considered Subversion and Mercurial too, but git appears to be better for single-location, multi-branched projects. I've asked about Perforce in another question, but my personal inclination is towards open source solutions for something as crucial as this.

    EDIT: After some more pondering, we've found that our actual pain point is that we use branches in CVS, and that branches in CVS are easiest to work with if you branch ALL files! The revised conclusion is that we can do this with CVS alone, by switching to a forest of Java projects, each corresponding to one of the levels above, and using the Eclipse build paths to tie them together so each CUSTOMER version pulls in the appropriate GROUP and CORE projects. We still want to switch to a better versioning system, but this is so important a decision that we want to delay it as much as possible.

    EDIT: I now have a proof-of-concept implementation of the CORE-GROUP-CUSTOMER concept using Google Guice 2.0 - the @ImplementedBy tag is just what we need. I wonder what everybody else does? Using if's all over the place?

    EDIT: Now I also need this functionality for web applications. Guice was a stopgap until JSR-330 is in place. Anybody with versioning experience?

    EDIT: JSR-330/299 is now in place with the JEE6 reference implementation Weld, based on JBoss Seam, and I have reimplemented the proof-of-concept with Weld. I can see that if we use @Alternative along with ... in beans.xml we can get the behaviour we desire, i.e. provide a new implementation for a given functionality in CORE without changing a bit in the CORE jars. Initial reading of the Servlet 3.0 specification indicates that it may support the same functionality for web application resources (not code). We will now do initial testing on the real application.
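
    For readers unfamiliar with the @ImplementedBy approach the poster settled on, here is a minimal Guice sketch of the CORE/CUSTOMER layering (all class names are hypothetical). The key property is that an explicit module binding takes precedence over the @ImplementedBy default, so a customer build can swap the implementation without touching CORE code:

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.ImplementedBy;
        import com.google.inject.Injector;

        // CORE declares the default implementation; no customer code is referenced here.
        @ImplementedBy(CoreInvoiceFormatter.class)
        interface InvoiceFormatter { String format(double amount); }

        class CoreInvoiceFormatter implements InvoiceFormatter {
            public String format(double amount) { return String.format("%.2f", amount); }
        }

        // CUSTOMER layer swaps the implementation without changing the CORE jars.
        class AcmeInvoiceFormatter implements InvoiceFormatter {
            public String format(double amount) { return "ACME " + String.format("%.2f", amount); }
        }

        class AcmeModule extends AbstractModule {
            @Override protected void configure() {
                bind(InvoiceFormatter.class).to(AcmeInvoiceFormatter.class);
            }
        }

        public class Main {
            public static void main(String[] args) {
                // CORE alone: the @ImplementedBy default kicks in.
                Injector core = Guice.createInjector();
                // CORE + customer module: the explicit binding wins.
                Injector acme = Guice.createInjector(new AcmeModule());
                System.out.println(core.getInstance(InvoiceFormatter.class).format(9.5));
                System.out.println(acme.getInstance(InvoiceFormatter.class).format(9.5));
            }
        }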


  • Windows NT Service shutdown issues

    - by Jeremiah Gowdy
    I have developed middleware that provides RPC functionality to multiple client applications on multiple platforms within our organization. The middleware is written in C# and runs as a Windows NT service. It handles things like file access to network shares, database access, etc. The middleware is hosted on two high-end systems running Windows Server 2008 R2.

    When one of our server administrators goes to reboot the machine, primarily to do Windows Updates, there are serious problems with how the system behaves in regard to my NT service. My service is designed to immediately stop listening for new connections, immediately start refusing new requests on existing connections, and otherwise shut down as rapidly as possible in the case of an OnStop or OnShutdown request from the SCM. Still, to maintain system integrity, operations that are currently in progress are allowed to continue for a reasonable time. Usually the service shuts down inside of 30 seconds (when the service is manually stopped, for example). However, when the system is instructed to restart, my service immediately loses access to network drives and UNC paths, causing data integrity problems for any open files and partial writes to those locations. My service does list Workstation (and thus the SMB Redirector) as a dependency, so I would think that my service would need to be stopped prior to Workstation/Redirector being stopped if Windows were honoring those dependencies.

    Basically, my application is forced to crash and burn, failing remote procedure calls and eventually being forced to terminate by the operating system after a timeout period has elapsed (it seems to be on the order of 20-30 seconds). Unlike a Windows application, my Windows NT service doesn't seem to have any power to stop a system shutdown in progress, delay the system shutdown, or even just get the opportunity to save out any pending network-share disk writes before being forcibly disconnected and shut down. How is an NT service developer supposed to have any kind of application integrity in this environment? Why is it that Forms applications get every opportunity to finish their business prior to shutdown, while services seem to get no such benefits?

    I have tried:

    - Calling SetProcessShutdownParameters via P/Invoke to try to notify my application of the shutdown sooner, to avoid the Redirector shutting down before I do.
    - Calling ServiceBase.RequestAdditionalTime with a value less than or equal to the two-minute limit.
    - Tweaking WaitToKillServiceTimeout.
    - Everything I can think of to make my service shut down faster.

    But in the end, I still get ~30 seconds of problematic time in which my service doesn't even seem to have been notified of an OnShutdown event yet, but requests are failing because the Redirector is no longer servicing my network share requests. How is this issue meant to be resolved? What can I do to delay or stop the shutdown, or at least be allowed to shut down my active tasks without the Redirector services disappearing out from under me? I can understand what Microsoft is trying to do to prevent services from dragging their feet and slowing shutdowns, but that seems like a great goal for Windows client operating systems, not for servers. I don't want my servers to shut down fast; I want operational integrity and graceful shutdowns. Thanks in advance for any help you can provide.

    PS: In regard to writing my own middleware, this is for a telephony application with sub-second "soft-realtime" response time requirements. It does make sense, and it's not a point I'm looking to debate. :)


  • Where are classpath, path and pathelement documented in Ant version 1.8.0?

    - by Robert Menteer
    I'm looking over the documentation that comes with Apache Ant version 1.8.0 and can't find where classpath, path, and pathelement are documented. I've found a page that describes path-like structures, but it doesn't list the valid attributes or nested elements for these. Another thing I can't find in the documentation is a description of the relationships between filelist, fileset, patternset, and path, and how to convert them back and forth. For instance, there has to be an easier way to compile only those classes in one package, while removing all class dependencies on the package classes, and updating documentation:

        <!-- Get list of files in which we're interested. -->
        <fileset id="java.source.set" dir="${src}">
            <include name="**/Package/*.java" />
        </fileset>

        <!-- Get a COMMA separated list of classes to compile. -->
        <pathconvert property="java.source.list" refid="java.source.set" pathsep=",">
            <globmapper from="${src}/*.@{src.extent}" to="*.class" />
        </pathconvert>

        <!-- Remove ALL dependencies on package classes. -->
        <depend srcdir="${src}" destdir="${build}" includes="${java.source.list}" closure="yes" />

        <!-- Get a list of up to date classes. -->
        <fileset id="class.uptodate.set" dir="${build}">
            <include name="**/*.class" />
        </fileset>

        <!-- Get list of source files for up to date classes. -->
        <pathconvert property="java.uptodate.list" refid="class.uptodate.set" pathsep=",">
            <globmapper from="${build}/*.class" to="*.java" />
        </pathconvert>

        <!-- Compile only those classes in package that are not up to date. -->
        <javac srcdir="${src}" destdir="${build}" classpathref="compile.classpath"
               includes="${java.source.list}" excludes="${java.uptodate.list}"/>

        <!-- Get list of directories of class files for package. -->
        <pathconvert property="class.dir.list" refid="java.source.set" pathsep=",">
            <globmapper from="${src}/*.java" to="${build}*" />
        </pathconvert>

        <!-- Convert directory list to path. -->
        <path id="class.dirs.path">
            <dirset dir="${build}" includes="class.dir.list" />
        </path>

        <!-- Update package documentation. -->
        <jdepend outputfile="${docs}/jdepend-report.txt">
            <classpath refid="compile.classpath" />
            <classpath location="${build}" />
            <classespath>
                <path refid="class.dirs.path" />
            </classespath>
            <exclude name="java.*" />
            <exclude name="javax.*" />
        </jdepend>

    Notice there are a number of conversions between filesets, paths, and comma-separated lists just to get the proper 'type' required by the different Ant tasks. Is there a way to simplify this while still processing the fewest files in a complex directory structure?


  • Build a gem with native extension (Gem::Installer::ExtensionBuildError)

    - by Arnaud Leymet
    I have the following configuration:

    - uname -a: Linux 2.6.24.2 i686 GNU/Linux (Ubuntu)
    - ruby -v: ruby 1.9.0 (2007-12-25 revision 14709) [i486-linux]
    - rails -v: Rails 3.0.0.beta3
    - gem -v: 1.3.5
    - rake --version: rake, version 0.8.7
    - make -v: GNU Make 3.81

    gem env:

        RUBYGEMS VERSION: 1.3.5
        RUBY VERSION: 1.9.0 (2007-12-25 patchlevel 0) [i486-linux]
        INSTALLATION DIRECTORY: /usr/lib/ruby1.9/gems/1.9.0
        RUBY EXECUTABLE: /usr/bin/ruby1.9
        EXECUTABLE DIRECTORY: /usr/bin
        RUBYGEMS PLATFORMS: ruby, x86-linux
        GEM PATHS: /usr/lib/ruby1.9/gems/1.9.0, /root/.gem/ruby/1.9.0
        GEM CONFIGURATION: :update_sources => true, :verbose => true, :benchmark => false, :backtrace => false, :bulk_threshold => 1000
        REMOTE SOURCES: http://gems.rubyforge.org/

    And when I try this simple command:

        gem install nokogiri

    here is what I get:

        # gem install nokogiri
        Building native extensions. This could take a while...
        ERROR: Error installing nokogiri:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.9 extconf.rb
        checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for xmlParseDoc() in -lxml2... yes
        checking for xsltParseStylesheetDoc() in -lxslt... yes
        checking for exsltFuncRegister() in -lexslt... yes
        checking for xmlRelaxNGSetParserStructuredErrors()... yes
        checking for xmlRelaxNGSetParserStructuredErrors()... yes
        checking for xmlRelaxNGSetValidStructuredErrors()... yes
        checking for xmlSchemaSetValidStructuredErrors()... yes
        checking for xmlSchemaSetParserStructuredErrors()... yes
        creating Makefile
        make
        cc -I. -I/usr/include/libxml2 -I/usr/include -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETPARSERSTRUCTUREDERRORS -I/opt/local/include/ -I/opt/local/include/libxml2 -I/opt/local/include -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -g -DXP_UNIX -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -o xml_document_fragment.o -c xml_document_fragment.c
        In file included from ./nokogiri.h:75,
                         from ./xml_document_fragment.h:4,
                         from xml_document_fragment.c:1:
        ./xml_document.h:5:16: error: st.h: No such file or directory
        make: *** [xml_document_fragment.o] Error 1
        Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1 for inspection.
        Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1/ext/nokogiri/gem_make.out

    The gem_make.out file contains exactly the same information as above. If I try another gem:

        gem install gherkin

    here is what I get:

        # gem install gherkin
        Building native extensions. This could take a while...
        ERROR: Error installing gherkin:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.9 extconf.rb
        checking for main() in -lc... yes
        creating Makefile
        make
        cc -I. -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -o gherkin_lexer_ar.o -c gherkin_lexer_ar.c
        /Users/aslakhellesoy/scm/gherkin/tasks/../ragel/i18n/ar.c.rl:11:16: error: re.h: No such file or directory
        make: *** [gherkin_lexer_ar.o] Error 1
        Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30 for inspection.
        Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30/ext/gherkin_lexer_ar/gem_make.out

    In fact, whenever I try to install a gem with a native extension, I get the same type of error. Does that ring a bell for anyone?


  • A* (A-star) implementation in AS3

    - by Bryan Hare
    Hey, I am putting together a project for a class that requires me to put AI in a top-down tactical strategy game in Flash AS3. I decided that I would use a node-based pathfinding approach, because the game is based on a circular movement scheme: when a player moves a unit, he essentially draws a series of connected line segments that the unit will follow along. I am trying to put together a similar operation for the AI units in our game by creating a list of nodes to traverse to a target node, hence my use of A* (the resulting path can be used to create this line). Here is my algorithm:

        function findShortestPath(startN:node, goalN:node)
        {
            var openSet:Array = new Array();
            var closedSet:Array = new Array();
            var pathFound:Boolean = false;
            startN.g_score = 0;
            startN.h_score = distFunction(startN, goalN);
            startN.f_score = startN.h_score;
            startN.fromNode = null;
            openSet.push(startN);
            var i:int = 0;
            for (i = 0; i < nodeArray.length; i++)
            {
                for (var j:int = 0; j < nodeArray[0].length; j++)
                {
                    if (!nodeArray[i][j].isPathable)
                    {
                        closedSet.push(nodeArray[i][j]);
                    }
                }
            }
            while (openSet.length != 0)
            {
                var cNode:node = openSet.shift();
                if (cNode == goalN)
                {
                    resolvePath(cNode);
                    return true;
                }
                closedSet.push(cNode);
                for (i = 0; i < cNode.dirArray.length; i++)
                {
                    var neighborNode:node = cNode.nodeArray[cNode.dirArray[i]];
                    if (!(closedSet.indexOf(neighborNode) == -1))
                    {
                        continue;
                    }
                    neighborNode.fromNode = cNode;
                    var tenativeg_score:Number = cNode.gscore + distFunction(neighborNode.fromNode, neighborNode);
                    if (openSet.indexOf(neighborNode) == -1)
                    {
                        neighborNode.g_score = neighborNode.fromNode.g_score + distFunction(neighborNode, cNode);
                        if (cNode.dirArray[i] >= 4)
                        {
                            neighborNode.g_score -= 4;
                        }
                        neighborNode.h_score = distFunction(neighborNode, goalN);
                        neighborNode.f_score = neighborNode.g_score + neighborNode.h_score;
                        insertIntoPQ(neighborNode, openSet);
                        //trace(" F Score of neighbor: " + neighborNode.f_score + " H score of Neighbor: " + neighborNode.h_score + " G_score or neighbor: " + neighborNode.g_score);
                    }
                    else if (tenativeg_score <= neighborNode.g_score)
                    {
                        neighborNode.fromNode = cNode;
                        neighborNode.g_score = cNode.g_score + distFunction(neighborNode, cNode);
                        if (cNode.dirArray[i] >= 4)
                        {
                            neighborNode.g_score -= 4;
                        }
                        neighborNode.f_score = neighborNode.g_score + neighborNode.h_score;
                        openSet.splice(openSet.indexOf(neighborNode), 1);
                        //trace(" F Score of neighbor: " + neighborNode.f_score + " H score of Neighbor: " + neighborNode.h_score + " G_score or neighbor: " + neighborNode.g_score);
                        insertIntoPQ(neighborNode, openSet);
                    }
                }
            }
            trace("fail");
            return false;
        }

    Right now this function creates paths that are often not optimal or wholly inaccurate given the target, and this generally happens when I have nodes that are not pathable. I am not quite sure what I am doing wrong right now. If someone could help me correct this, I would appreciate it greatly.

    Some notes: my open set is essentially a priority queue, so that's how I sort my nodes by cost. Here is that function:

        function insertIntoPQ(iNode:node, pq:Array)
        {
            var inserted:Boolean = true;
            var iterater:int = 0;
            while (inserted)
            {
                if (iterater == pq.length)
                {
                    pq.push(iNode);
                    inserted = false;
                }
                else if (pq[iterater].f_score >= iNode.f_score)
                {
                    pq.splice(iterater, 0, iNode);
                    inserted = false;
                }
                ++iterater;
            }
        }

    Thanks!
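
    Since the question is about where the bookkeeping goes wrong, a compact reference version of the A* relaxation step may help for side-by-side comparison. This is a generic sketch in Java, not the poster's AS3; Node, heuristic() and cost() are hypothetical placeholders. One observable difference from the code above: here the fromNode/g assignments happen only after the tentative score wins, whereas the AS3 version assigns fromNode before checking.

        import java.util.*;

        class Node {
            double g = Double.POSITIVE_INFINITY, f = Double.POSITIVE_INFINITY;
            Node from;
            List<Node> neighbors = new ArrayList<>();
        }

        class AStar {
            // Estimates must never overestimate the true cost or A* loses optimality.
            static double heuristic(Node a, Node goal) { return 0; } // placeholder
            static double cost(Node a, Node b) { return 1; }         // placeholder

            static boolean find(Node start, Node goal) {
                PriorityQueue<Node> open =
                    new PriorityQueue<>(Comparator.comparingDouble((Node n) -> n.f));
                Set<Node> closed = new HashSet<>();
                start.g = 0;
                start.f = heuristic(start, goal);
                open.add(start);
                while (!open.isEmpty()) {
                    Node current = open.poll();
                    if (current == goal) return true;     // path: walk goal.from links back
                    if (!closed.add(current)) continue;   // skip nodes already expanded
                    for (Node nb : current.neighbors) {
                        if (closed.contains(nb)) continue;
                        double tentative = current.g + cost(current, nb);
                        // Only touch nb.from and nb.g AFTER the tentative score wins.
                        if (tentative < nb.g) {
                            nb.from = current;
                            nb.g = tentative;
                            nb.f = tentative + heuristic(nb, goal);
                            open.remove(nb);  // reinsert so the queue re-sorts this node
                            open.add(nb);
                        }
                    }
                }
                return false;
            }
        }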


  • simplify javascript code using regex

    - by Pradyut Bhattacharya
    Hi, I have code which can show YouTube videos if there are any links to YouTube in a piece of text, for example:

        pradyut.dyndns.org
        http://www.youtube.com/watch?v=-LiPMxFBLZY
        testing
        http://www.youtube.com/watch?v=Q3-l22b_Qg8&feature=related

    I forward this text to the function:

        function to_youtubelink(text) {
            if (text.indexOf('<') > 0 || text.indexOf('"') > 0 || text.indexOf('>') > 0)
                return text;
            else {
                var obj_text = new Array();
                var oi = 0;
                while (text.indexOf('http://') >= 0) {
                    // getting the paths
                    var si = text.indexOf('http://');
                    var gr = text.indexOf('\n', si);
                    var sp = text.indexOf(' ', si);
                    var ei;
                    if (gr > 0 || sp > 0) {
                        if (gr > 0 && sp > 0) {
                            if (gr < sp) {
                                ei = gr;
                            } else {
                                ei = sp;
                            }
                        } else if (gr > 0) {
                            ei = gr;
                        } else {
                            ei = sp;
                        }
                    } else {
                        ei = text.length;
                    }
                    var it = text.substring(si, ei);
                    if (it.indexOf('"') > 0) {
                        it.substring(0, it.indexOf('"'));
                    }
                    if (ei < 0)
                        ei = text.length;
                    else
                        ei = text.indexOf(' ', si);
                    obj_text[oi] = it;
                    text = text.replace(it, '[link_service]');
                    oi++;
                }
                var ob_text = new Array();
                var ob = 0;
                for (oi = 0; oi < obj_text.length; oi++) {
                    if (is_youtubelink(obj_text[oi])) {
                        ob_text[ob] = to_utubelink(obj_text[oi]);
                        ob++;
                    }
                }
                oi = 0;
                while (text.indexOf('[link_service]') >= 0) {
                    text = text.replace('[link_service]', obj_text[oi]);
                    oi++;
                }
                for (ob = 0; ob < ob_text.length; ob++) {
                    text = text + "\n\n" + ob_text[ob];
                }
                return text;
            }
        }

        function is_youtubelink(text) {
            var matches = text.match(/http:\/\/(?:www\.)?youtube.*watch\?v=([a-zA-Z0-9\-_]+)/);
            if (matches) {
                return true;
            } else {
                return false;
            }
        }

        function to_utubelink(text) {
            var video_id = text.split('v=')[1];
            var ampersandPosition = video_id.indexOf('&');
            if (ampersandPosition != -1) {
                video_id = video_id.substring(0, ampersandPosition);
            }
            text = "<iframe title=\"YouTube video player\" class=\"youtube-player\" type=\"text/html\" width=\"425\" height=\"350\" src=\"http://www.youtube.com/embed/" + video_id + "\" frameborder=\"0\"></iframe>";
            return text;
        }

    I'm getting the output properly, but I was wondering whether the code could be done better and simplified using a regex, especially the part that extracts the URLs. Thanks.
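
    As a point of comparison, the whole scan-extract-replace pass can indeed collapse into a single regex loop. Below is a hedged sketch in Java rather than the poster's JavaScript (the class and method names are hypothetical); it reuses the pattern idea from is_youtubelink() but adds a capture group for the video id, so the split('v=') step disappears:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        class YouTubeEmbeds {
            // Group 1 captures the video id; the trailing \S* swallows any
            // extra query parameters such as &feature=related.
            private static final Pattern WATCH_URL =
                Pattern.compile("http://(?:www\\.)?youtube\\.com/watch\\?v=([a-zA-Z0-9_-]+)\\S*");

            static String appendEmbeds(String text) {
                StringBuilder embeds = new StringBuilder();
                Matcher m = WATCH_URL.matcher(text);
                while (m.find()) {
                    embeds.append("\n\n<iframe title=\"YouTube video player\" class=\"youtube-player\"")
                          .append(" type=\"text/html\" width=\"425\" height=\"350\"")
                          .append(" src=\"http://www.youtube.com/embed/").append(m.group(1))
                          .append("\" frameborder=\"0\"></iframe>");
                }
                // Same net effect as the poster's function: original text is kept
                // unchanged, with one embed appended per YouTube link found.
                return text + embeds;
            }
        }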


  • jQuery AutoComplete (jQuery UI 1.8rc3) with ASP.NET web service

    - by user296640
    Currently, I have this version of the autocomplete control working when returning XML from a .ashx handler. The XML looks like this:

        <?xml version="1.0" encoding="UTF-8" standalone="no" ?>
        <States>
            <State>
                <Code>CA</Code>
                <Name>California</Name>
            </State>
            <State>
                <Code>NC</Code>
                <Name>North Carolina</Name>
            </State>
            <State>
                <Code>SC</Code>
                <Name>South Carolina</Name>
            </State>
        </States>

    The autocomplete code looks like this:

        $('.autocompleteTest').autocomplete({
            source: function(request, response) {
                var list = [];
                $.ajax({
                    url: "http://commonservices.qa.kirkland.com/StateLookup.ashx",
                    dataType: "xml",
                    async: false,
                    data: request,
                    success: function(xmlResponse) {
                        list = $("State", xmlResponse).map(function() {
                            return {
                                value: $("Code", this).text(),
                                label: $("Name", this).text()
                            };
                        }).get();
                    }
                });
                response(list);
            },
            focus: function(event, ui) {
                $('.autocompleteTest').val(ui.item.label);
                return false;
            },
            select: function(event, ui) {
                $('.autocompleteTest').val(ui.item.label);
                $('.autocompleteValue').val(ui.item.value);
                return false;
            }
        });

    For various reasons, I'd rather be calling an ASP.NET web service, but I can't get it to work. To change over to the service (I'm using a local service to keep it simple), the start of the autocomplete code is:

        $('.autocompleteTest').autocomplete({
            source: function(request, response) {
                var list = [];
                $.ajax({
                    url: "/Services/GeneralLookup.asmx/StateList",
                    dataType: "xml",

    This code is on a page at the root of the site, and GeneralLookup.asmx is in a subfolder named Services. But a breakpoint in the web service never gets hit, and no autocomplete list is generated. In case it makes a difference, the XML that comes from the .asmx is:

        <?xml version="1.0" encoding="utf-8" ?>
        <string xmlns="http://www.kirkland.com/"><State>
            <Code>CA</Code>
            <Name>California</Name>
        </State>
        <State>
            <Code>NC</Code>
            <Name>North Carolina</Name>
        </State>
        <State>
            <Code>SC</Code>
            <Name>South Carolina</Name>
        </State></string>

    Functionally equivalent, since I never use the name of the root node in the mapping code. I haven't seen anything in the jQuery docs about calling a .asmx service from this control, but a .ajax call is a .ajax call, right? I've tried various different paths to the .asmx (~/Services/), and I've even moved the service into the same path to eliminate these issues. No luck with either. Any ideas?


  • How do I properly add existing source code files to my Xcode project?

    - by BeachRunnerJoe
    I'm new to iPhone development and I'm still getting familiar with the Mac dev environment, including Xcode. I want to add some third-party code to my iPhone project, but when I add the "existing files" to my Xcode project, I'm presented with a dialog box that has far too many options that I don't understand and, as such, my project isn't working. When I #import headerfilename.h, I get a build error that reads "headerfilename.h: No such file or directory". Can anyone explain to me what all these options mean, or give me a link to some documentation that can? I'm having a hard time finding anything in Apple's docs. Which options do I want to choose to add existing source code files to my Xcode project?

    I should note that the source code files that I'm trying to add are located in my project/Classes/frameworkname/ directory. After they're added, do I need to reference this new code directory in my project settings anywhere (i.e. some kind of header file directory variable)? Thanks so much!

    Update: I found the following answers/responses on the Apple dev forums that were very useful and helped me fix my issue.

    First response:

        To make it simple:
        - if you do not check the copy option, the file stays where it is;
        - if you check it, it is copied into your project folders.
        In the first case (which seems to be what you are doing), you need to tell the compiler that the header files are in another directory: Project Info > Build > Search Paths > User Header Search Paths: add the directory you took the header file from. Hope this will help.

    Second response:

        You have discovered the most confusing dialog box that ever came out of Cupertino. Six years of Xcode, and this thing is still partly a mystery to me. To even get that far, I had to make many test projects to try and reverse-engineer what this thing does.
        The "Copy" box means that it will copy the files as they are right now into the project. If this box is not checked, then it just references those files during a build and copies them as they are at THAT time. For source code, you want the Copy box checked.
        The "relative to" part is a total mystery to me and I can't help you with that. I usually leave it however it is already set. Does it mean relative to where they are on disk, the arrangement in Xcode, or in the bundle? Who knows.
        The last 2 radio buttons SEEM to mean that it will either re-create the folder structure of the folder you are adding, or just put "fake" folders in Xcode that point to the real folders. This is probably your problem - you are adding source code that is not all at the top level, and when it goes to find it, it does not re-create the hierarchy.
        Others can supply a better way, hopefully, but what I would do is put all of the source in one folder and add that, using the Copy box. Then in Xcode you can make whatever bogus folders you want and put the source file names in those fake folders.


  • XSLT: Select all nodes containing a specific substring

    - by Mike
    I'm trying to write an XPath expression that will select certain nodes that contain a specific word, in this case "Lockwood". The correct answer is 3, and both of these paths give me 3:

        count(//*[contains(./*,'Lockwood')])
        count(BusinessLetter/*[contains(../*,'Lockwood')])

    But when I try to output the text of each specific node:

        //*[contains(./*,'Lockwood')][1]
        //*[contains(./*,'Lockwood')][2]
        //*[contains(./*,'Lockwood')][3]

    node 1 ends up containing all the text, and nodes 2 and 3 are blank. Can someone please tell me what's happening or what I'm doing wrong? Thanks.

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="XPathFunctions.xsl"?>
        <BusinessLetter>
            <Head>
                <SendDate>November 29, 2005</SendDate>
                <Recipient>
                    <Name Title="Mr.">
                        <FirstName>Joshua</FirstName>
                        <LastName>Lockwood</LastName>
                    </Name>
                    <Company>Lockwood &amp; Lockwood</Company>
                    <Address>
                        <Street>291 Broadway Ave.</Street>
                        <City>New York</City>
                        <State>NY</State>
                        <Zip>10007</Zip>
                        <Country>United States</Country>
                    </Address>
                </Recipient>
            </Head>
            <Body>
                <List>
                    <Heading>Along with this letter, I have enclosed the following items:</Heading>
                    <ListItem>two original, execution copies of the Webucator Master Services Agreement</ListItem>
                    <ListItem>two original, execution copies of the Webucator Premier Support for Developers Services Description between Lockwood &amp; Lockwood and Webucator, Inc.</ListItem>
                </List>
                <Para>Please sign and return all four original, execution copies to me at your earliest convenience. Upon receipt of the executed copies, we will immediately return a fully executed, original copy of both agreements to you.</Para>
                <Para>Please send all four original, execution copies to my attention as follows:
                    <Person>
                        <Name>
                            <FirstName>Bill</FirstName>
                            <LastName>Smith</LastName>
                        </Name>
                        <Address>
                            <Company>Webucator, Inc.</Company>
                            <Street>4933 Jamesville Rd.</Street>
                            <City>Jamesville</City>
                            <State>NY</State>
                            <Zip>13078</Zip>
                            <Country>USA</Country>
                        </Address>
                    </Person>
                </Para>
                <Para>If you have any questions, feel free to call me at <Phone>800-555-1000 x123</Phone> or e-mail me at <Email>[email protected]</Email>.</Para>
            </Body>
            <Foot>
                <Closing>
                    <Name>
                        <FirstName>Bill</FirstName>
                        <LastName>Smith</LastName>
                    </Name>
                    <JobTitle>VP of Operations</JobTitle>
                </Closing>
            </Foot>
        </BusinessLetter>
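
    The behaviour the poster describes is XPath predicate precedence: //*[pred][2] means "every element matching pred that is also the second such match within its own parent", while (//*[pred])[2] first builds the whole node set and then takes its second member in document order, which is usually what is wanted. A small self-contained sketch of the difference, in Java with a trimmed-down document (all names hypothetical; the predicate uses contains(.) rather than the poster's contains(./*) purely for brevity):

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.xml.sax.InputSource;

        public class XPathPrecedence {
            public static void main(String[] args) throws Exception {
                String xml = "<a><b>Lockwood one</b><b>Lockwood two</b><c>Lockwood three</c></a>";
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new InputSource(new StringReader(xml)));
                XPathFactory xpf = XPathFactory.newInstance();

                // Unparenthesized: [2] filters by position within each parent,
                // so only the 2nd matching child of <a> survives.
                Node n1 = (Node) xpf.newXPath().evaluate(
                        "//*[contains(.,'Lockwood')][2]", doc, XPathConstants.NODE);
                // Parenthesized: take the 2nd node of the full result set
                // in document order.
                Node n2 = (Node) xpf.newXPath().evaluate(
                        "(//*[contains(.,'Lockwood')])[2]", doc, XPathConstants.NODE);

                System.out.println(n1 == null ? "none" : n1.getTextContent()); // Lockwood two
                System.out.println(n2.getTextContent());                       // Lockwood one
            }
        }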


  • How do I solve "Two different CRTLDLLs are loaded" when using packages in C++ Builder 2010?

    - by David M
    Hi, we are trying to split up our monolithic EXE into a combination of an EXE and several packages. So far we have one package that we're trying to use, and when running the EXE, CodeGuard shows the following error on startup:

        CG Error
        Two different CRTLDLLs are loaded. CG might report false errors
        (C:\Windows\system32\CC32100MT.DLL)
        (D:\Projects\Foo\Bar.bpl)
        OK

    I read this as two different runtime libraries being loaded: one the correct one (CC32100MT.dll), and one incorrect, namely the package we're trying to use. Continuing to run the program shows odd errors, especially when casting between classes or passing a pointer to a class as a parameter in a method that crosses the EXE/DLL boundary. CodeGuard itself doesn't show any other errors at all, though. How do we solve this?

    Some more details. We've looked at as many things as we (the developer working on this and I) can collectively think of:

    - Each project is built using runtime packages. The EXE host lists Bar in its package list.
    - Each project is set to compile with the dynamic RTL. However, changing this does not solve the problem.
    - The package is linked to the EXE via its BPI file, but linking via a LIB makes no difference either.
    - The EXE and BPL are compiled with the same project settings, where the same options exist for both types of project. We think, anyway :)
    - There is only one copy of the BPL and BPI on the system: it's definitely linking to the right one.
    - Examining the EXE and BPL with Depends and TDump shows they are both using C:\Windows\system32\CC32100MT.DLL. They should both be using the one RTL.
    - Creating a new project (a plain VCL forms application) and linking to the BPL (via its BPI) works fine. Something in the process of adding all the files and LIBs that make our EXE contain the code it needs changes this, but we haven't been able to figure out what.
    - The LIBs all either correspond to DLLs we use (flat C interface, usually looking as though they were built with MSVC) or are simple projects with lots of related files, compiled to a LIB for the purpose of linking into the EXE. These correspond roughly to the areas of the program we want to split into BPLs, by the way. There don't seem to be project options for the LIB projects that would affect RTL linking, unless we've missed them.
    - I have exhaustively hunted through Depends and looked at all RTL and CC32*.dll files the EXE and every single DLL references. All are identical: rtl140.bpl and CC32100MT.DLL. Fully qualified paths show they are the same files, too. Everything should be using the one same runtime library.

    We're stumped. Absolutely stumped. We've had other problems using BPLs (they seem to be surprisingly tricky things, especially using C++) but have managed to solve them all. This one we've had no luck with at all, and we'd really appreciate any insights :)

    We're using C++Builder 2010 (as part of RAD Studio actually, but with little Delphi code apart from components).


  • Unable to browse some PDFs and DOCs

    - by JamesEggers
    I have a web site that uses Microsoft Indexing Service to index and query a directory that holds various documents of type PDF, RTF, MHT, and DOC. The indexing and querying works well (for the most part); however, some files will load while others will not. This is a Windows Server 2003 box running the site on IIS 6. The indexed directory is a subdirectory off of the site's root directory (i.e. http://my.domain.com/files/). The file paths are accurate in the URL; however, I can only access some of the files of each file type. The files that I cannot access give a "404 File Not Found".

    I am able to open all files via Windows Explorer; however, attempting to open them via a browser over HTTP is hit and miss. Has anyone experienced this issue and knows how to resolve it? Does anyone have any idea why I can access some files but not others? Does anyone have recommendations on what to look into (e.g. does the file's owner matter, or something like that)?

    EDIT: Here are the request and response headers for a bad file:

        GET /files/file1.pdf HTTP/1.1
        Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Accept-Encoding: gzip, deflate
        Proxy-Connection: Keep-Alive
        Host: my.domain.com

        HTTP/1.1 404 Not Found
        Content-Length: 1635
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Mon, 01 Jun 2009 15:38:54 GMT

        [typical 404 page markup excluded]

    Here are the request/response headers for a good file:

        GET /files/file2.pdf HTTP/1.1
        Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Accept-Encoding: gzip, deflate
        Proxy-Connection: Keep-Alive
        Host: my.domain.com

        HTTP/1.1 200 OK
        Content-Length: 352464
        Content-Type: application/pdf
        Last-Modified: Tue, 13 Jan 2009 15:27:35 GMT
        Accept-Ranges: bytes
        ETag: "74ccc5759375c91:2a47"
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Mon, 01 Jun 2009 15:50:33 GMT


  • Foolishness Check: PHP Class finds Class file but not Class in the file.

    - by Daniel Bingham
    I'm at a loss here. I've defined an abstract superclass in one file and a subclass in another. I have required the superclass's file, and the stack trace reports finding and including it. However, it then returns an error when it hits the 'extends' line:

        Fatal error: Class 'HTMLBuilder' not found in View/Markup/HTML/HTML4.01/HTML4_01Builder.php on line 7.

    I had this working with another class tree that uses factories a moment ago. I just added the builder layer between the factories and the consumer. The factory layer looked almost exactly the same in terms of includes and dependencies. So that makes me think I must have done something silly that causes the HTMLBuilder.php file to not be included correctly or interpreted correctly or some such. Here's the full stack trace (paths slightly altered):

        #  Time    Memory  Function                                                          Location
        1  0.0001  53904   {main}( )                                                         ../index.php:0
        2  0.0002  67600   require_once( 'View/Page.php' )                                   ../index.php:3
        3  0.0003  75444   require_once( 'View/Sections/SectionFactory.php' )                ../Page.php:4
        4  0.0003  81152   require_once( 'View/Sections/HTML/HTMLSectionFactory.php' )       ../SectionFactory.php:3
        5  0.0004  92108   require_once( 'View/Sections/HTML/HTMLTitlebarSection.php' )      ../HTMLSectionFactory.php:5
        6  0.0005  99716   require_once( 'View/Markup/HTML/HTMLBuilder.php' )                ../HTMLTitlebarSection.php:3
        7  0.0005  103580  require_once( 'View/Markup/MarkupBuilder.php' )                   ../HTMLBuilder.php:3
        8  0.0006  124120  require_once( 'View/Markup/HTML/HTML4.01/HTML4_01Builder.php' )   ../MarkupBuilder.php:3

    Here's the code in question. Parent class (View/Markup/HTML/HTMLBuilder.php):

        <?php
        require_once('View/Markup/MarkupBuilder.php');

        abstract class HTMLBuilder extends MarkupBuilder {
            public abstract function getLink($text, $href);
            public abstract function getImage($src, $alt);
            public abstract function getDivision($id, array $classes=NULL, array $children=NULL);
            public abstract function getParagraph($text, array $classes=NULL, $id=NULL);
        }
        ?>

    Child class (View/Markup/HTML/HTML4.01/HTML4_01Builder.php):

        <?php
        require_once('HTML4_01Factory.php');
        require_once('View/Markup/HTML/HTMLBuilder.php');

        class HTML4_01Builder extends HTMLBuilder {

            private $factory;

            public function __construct() {
                $this->factory = new HTML4_01Factory();
            }

            public function getLink($href, $text) {
                $link = $this->factory->getA();
                $link->addAttribute('href', $href);
                $link->addChild($this->factory->getText($text));
                return $link;
            }

            public function getImage($src, $alt) {
                $image = $this->factory->getImg();
                $image->addAttribute('src', $src);
                $image->addAttribute('alt', $alt);
                return $image;
            }

            public function getDivision($id, array $classes=NULL, array $children=NULL) {
                $div = $this->factory->getDiv();
                $div->setID($id);
                if (!empty($classes)) {
                    $div->addClasses($classes);
                }
                if (!empty($children)) {
                    $div->addChildren($children);
                }
                return $div;
            }

            public function getParagraph($text, array $classes=NULL, $id=NULL) {
                $p = $this->factory->getP();
                $p->addChild($this->factory->getText($text));
                if (!empty($classes)) {
                    $p->addClasses($classes);
                }
                if (!empty($id)) {
                    $p->setID($id);
                }
                return $p;
            }
        }
        ?>

    I would appreciate any and all ideas. I'm at a complete loss as to what is going wrong. I'm sure it's something stupid I just can't see...


  • Building a modular website with Zend Framework: am I on the right track?

    - by Oliver
    Hi, I'm a little bit confused by reading all these posts and tutorials about starting with Zend, because there are so many different ways to solve a problem. I only need feedback on my code to know if I am on the right track.

    To simply get a (hard-coded) navigation for my site (depending on who is logged in), I built a controller plugin with a postDispatch method that holds the following code:

        public function postDispatch(Zend_Controller_Request_Abstract $request)
        {
            $menu = new Menu();
            // Render menu in menu.phtml
            $view = new Zend_View(); // NEW view -> add view helper
            $prefix = 'My_View_Helper';
            $dir = dirname(__FILE__).'/../../View/Helper/';
            $view->addHelperPath($dir, $prefix);
            $view->setScriptPath('../application/default/views/scripts/menu');
            $view->menu = $menu->getMenu();
            $this->getResponse()->insert('menu', $view->render('menu.phtml'));
        }

    Is it right that I need to set the helper path once again? I already did this in a plugin controller named ViewSetup, where I do some setup for the view like doctype, head links, helper paths... (this step is from the book Zend Framework in Action).

    The Menu class which is instantiated looks like this:

        class Menu
        {
            protected $_menu = array();

            /**
             * Menu for not-logged-in and logged-in users
             */
            public function getMenu()
            {
                $auth = Zend_Auth::getInstance();
                $view = new Zend_View();
                // check if user is logged in
                if (!$auth->hasIdentity()) {
                    $this->_menu = array(
                        'page1' => array(
                            'label' => 'page1',
                            'title' => 'page1',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page1'))
                        ),
                        'page2' => array(
                            'label' => 'page2',
                            'title' => 'page2',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page2'))
                        ),
                        'page3' => array(
                            'label' => 'page3',
                            'title' => 'page3',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page3'))
                        ),
                        'page4' => array(
                            'label' => 'page4',
                            'title' => 'page4',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page4'))
                        ),
                        'page5' => array(
                            'label' => 'page5',
                            'title' => 'page5',
                            'url' => $view->url(array('module' => 'pages', 'controller' => 'my', 'action' => 'page5'))
                        )
                    );
                } else {
                    // user is of type 'client'
                    // ..
                }
                return $this->_menu;
            }
        }

    Here's my view script:

        <ul id="mainmenu">
            <?php echo $this->partialLoop('menuItem.phtml', $this->menu) ?>
        </ul>

    This is working so far. My question is: is it usual to do it this way, and is there anything to improve? I'm new to Zend, and many tutorials on the web are deprecated, which often isn't obvious. Even the book is already out of date where the autoloader is mentioned. Many thanks in advance.


  • Database Table Schema and Aggregate Roots

    - by bretddog
    Hi, the application is single-user, one tier (one PC); the database is SQL CE. The data-service layer will be (I think): a repository returning domain objects and querying the database with LINQ to SQL (dbml). There are obviously a lot more columns; this is a simplified view:

        http://img573.imageshack.us/img573/3612/ss20110115171817w.png

    This is my first attempt at creating a two-table database. I think the table schema makes sense, but I need some reassurance or critique, because the table relations look quite scary, to be honest. I'm hoping you could look at the table schema and respond if there are clear signs of trouble or errors that you spot right away, and, if you have time, look at the program summary and questions below and see if the table layout makes sense for those points. Please be brutal; I will try to defend :)

    Program summary:

    a) A set of categories, each having a set of strategies (1:m).
    b) Each day a number of items will be produced, and each strategy MAY reference them. (So there can be 50 items, and a strategy may reference 23 of them.)
    c) An item can be referenced by more than one strategy, so I think it's an m:m relation.
    d) Status values will be logged at fixed time fractions through the day, for each Strategy, each StrategyItem, and each Item.
    e) An action on an item may be executed by a strategy that references it. This is logged as ItemAction (could have called it StrategyItemAction).

    User requests: b) - e) describe the main activity mode of the program: to work with only today's DayLog, for each category. The second-priority activity is retrieval of history, typically from all categories, from day x to day y: get all StrategyDailyLog.

    Questions:

    First, does the overall layout look sound? I'm worried to see that there are so many relationships in all directions, connecting everything. Is this normal, or does it look like trouble?

    StrategyItem is made to represent an m:m relationship. Is it correct as I noted 1:m / 1:1 (marked red)?

    StrategyItemTimeLog and ItemTimeLog log values that both need to be retrieved together when retrieving a StrategyItem. The reason I separated them is that the first is strategy-specific, while several strategies can reference the same item, so I thought not to duplicate those values that do not depend on the strategy, but only on the item. Hence I also dragged out the LogTime, as it seems to be the only parameter uniting the logs. But this all looks quite disturbing with those 3 tables. Does it make sense at all? Or do you have a suggestion?

    The pink circles show my vague attempt at aggregate-root paths. I've been thinking in terms of "which entity is responsible for delete". Though I'm unsure about the actual root, I think it's Category. Does it make sense in relation to the user requests described above?
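
    One way to read the second question: StrategyItem is a classic association (join) entity, so the m:m between Strategy and Item is really two 1:m relationships hung off the join row, and the strategy-specific log attaches to that row while the strategy-independent log attaches to Item. A hedged sketch of that shape (plain Java for illustration only, since the poster's stack is C#/LINQ to SQL; names are taken from the post):

        import java.util.ArrayList;
        import java.util.List;

        // Category as the aggregate root: deleting it cascades down the tree.
        class Category { List<Strategy> strategies = new ArrayList<>(); }

        class Strategy {
            List<StrategyItem> items = new ArrayList<>();        // 1:m to the join entity
            List<StrategyDailyLog> dailyLogs = new ArrayList<>();
        }

        // Join entity: turns Strategy m:m Item into two 1:m links, and gives the
        // strategy-specific per-time-fraction values somewhere to live.
        class StrategyItem {
            Strategy strategy;
            Item item;
            List<StrategyItemTimeLog> timeLogs = new ArrayList<>(); // strategy-dependent values
        }

        class Item {
            List<ItemTimeLog> timeLogs = new ArrayList<>();  // strategy-independent values
            List<ItemAction> actions = new ArrayList<>();
        }

        class StrategyDailyLog {}
        class StrategyItemTimeLog {}
        class ItemTimeLog {}
        class ItemAction {}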


  • Scala: Recursively building all paths in a graph?

    - by DarqMoth
    Trying to build all existing paths for an udirected graph defined as a map of edges using the following algorithm: Start: with a given vertice A Find an edge (X.A, X.B) or (X.B, X.A), add this edge to path Find all edges Ys fpr which either (Y.C, Y.B) or (Y.B, Y.C) is true For each Ys: A=B, goto Start Providing edges are defined as the following map, where keys are tuples consisting of two vertices: val edges = Map( ("n1", "n2") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n1") -> "n5n1", ("n5", "n4") -> "n5n4") As an output I need to get a list of ALL pathes where each path is a list of adjecent edges like this: val allPaths = List( List(("n1", "n2") -> "n1n2"), List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), List(("n5", "n1") -> "n5n1"), List(("n5", "n4") -> "n5n4"), List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4")) //... //... more pathes to go } Note: Edge XY = (x,y) - "xy" and YX = (y,x) - "yx" exist as one instance only, either as XY or YX So far I have managed to implement code that duplicates edges in the path, which is wrong and I can not find the error: object Graph2 { type Vertice = String type Edge = ((String, String), String) type Path = List[((String, String), String)] val edges = Map( //(("v1", "v2") , "v1v2"), (("v1", "v3") , "v1v3"), (("v3", "v4") , "v3v4") //(("v5", "v1") , "v5v1"), //(("v5", "v4") , "v5v4") ) def main(args: Array[String]): Unit = { val processedVerticies: Map[Vertice, Vertice] = Map() val processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)] = Map() val path: Path = List() println(buildPath(path, "v1", processedVerticies, processedEdges)) } /** * Builds path from connected by edges vertices starting from given vertice * Input: map of edges * Output: list of connected edges like: List(("n1", "n2") -> "n1n2"), List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), List(("n5", "n1") -> "n5n1"), List(("n5", "n4") -> "n5n4"), List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4")) */ def buildPath(path: Path, vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)]): List[Path] = { println("V: " + vertice + " VM: " + processedVerticies + " EM: " + processedEdges) if (!processedVerticies.contains(vertice)) { val edges = children(vertice) println("Edges: " + edges) val x = edges.map(edge => { if (!processedEdges.contains(edge._1)) { addToPath(vertice, processedVerticies.++(Map(vertice -> vertice)), processedEdges, path, edge) } else { println("ALready have edge: "+edge+" Return path:"+path) path } }) val y = x.toList y } else { List(path) } } def addToPath( vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)], path: Path, edge: Edge): Path = { val newPath: Path = path ::: List(edge) val key = edge._1 val nextVertice = neighbor(vertice, key) val x = buildPath (newPath, nextVertice, processedVerticies, processedEdges ++ (Map((vertice, nextVertice) -> (vertice, nextVertice))) ).flatten // need define buidPath type x } def children(vertice: Vertice) = { edges.filter(p => (p._1)._1 == vertice || (p._1)._2 == vertice) } def containsPair(x: (Vertice, Vertice), m: Map[(Vertice, Vertice), (Vertice, Vertice)]): Boolean = { m.contains((x._1, x._2)) || m.contains((x._2, x._1)) } def neighbor(vertice: String, key: (String, String)): String = key match { case (`vertice`, x) => x case (x, `vertice`) => x } } 
    Running this results in:

        List(List(((v1,v3),v1v3), ((v1,v3),v1v3), ((v3,v4),v3v4)))

    Why is that?
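
    A hedged diagnosis, from tracing the code above: addToPath calls buildPath(...).flatten, which collapses the List[Path] coming back from the recursion into a single Path, so edges belonging to distinct downstream paths get concatenated into one list; that is exactly how ((v1,v3),v1v3) ends up twice. Keeping List[Path] all the way down, and tracking visited edges by their canonical map key so the reversed orientation is caught too, avoids this. A minimal sketch along those lines (PathSketch and its helper names are made up for illustration; it emits only the maximal path from the start, and emitting acc.reverse at every step instead would collect the shorter prefixes as well):

        object PathSketch {
          type Edge = ((String, String), String)
          type Path = List[Edge]

          val edges = Map(
            ("v1", "v3") -> "v1v3",
            ("v3", "v4") -> "v3v4")

          // Edges touching a vertex, in either orientation.
          def touching(v: String): List[Edge] =
            edges.toList.filter { case ((a, b), _) => a == v || b == v }

          // The endpoint of an edge key that is not v.
          def other(v: String, key: (String, String)): String =
            if (key._1 == v) key._2 else key._1

          // Simple paths from v; 'visited' holds edge keys already used,
          // so an edge is never taken twice regardless of direction.
          def walk(v: String, visited: Set[(String, String)], acc: Path): List[Path] = {
            val next = touching(v).filterNot(e => visited(e._1))
            if (next.isEmpty) List(acc.reverse)
            else next.flatMap(e => walk(other(v, e._1), visited + e._1, e :: acc))
          }

          def main(args: Array[String]): Unit =
            println(walk("v1", Set(), Nil))
            // prints List(List(((v1,v3),v1v3), ((v3,v4),v3v4))) - no duplicate edge
        }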


  • WordPress blog with Joomla?

    - by user427902
    Hi, I had this WordPress installation in a subfolder (not root), like http://server/blog/. Then I installed Joomla in the root (http://server/). Everything seems to be working fine with the Joomla part; however, the blog part is messed up. If I browse the homepage of my blog, http://server/blog/, it works like a charm. But when I try to view individual blog pages, say http://server/blog/some_category/some_post, I get a Joomla 404 page. So I was wondering if it is possible to use both WordPress and Joomla on the same server in the setup I am trying. Let me clarify that I am NOT looking to integrate user login and other such things. I just want the blog to be functional under a subfolder while I run the Joomla site in the root. So, what is the correct way to go about it? Can this be solved by any config edits or something else?

    Edit: Here's the .htaccess for Joomla... (I can't find any .htaccess for WP though; still looking for it.)

        ##
        # @version $Id: htaccess.txt 14401 2010-01-26 14:10:00Z louis $
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2010 Open Source Matters. All rights reserved.
        # @license http://www.gnu.org/copyleft/gpl.html GNU/GPL
        # Joomla! is Free Software
        ##

        #####################################################
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that disallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        #####################################################

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        # mod_rewrite in use
        RewriteEngine On

        ########## Begin - Rewrite rules to block out some common exploits
        ## If you experience problems on your site block out the operations listed below
        ## This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        ## Deny access to extension xml files (uncomment out to activate)
        #<Files ~ "\.xml$">
        #Order allow,deny
        #Deny from all
        #Satisfy all
        #</Files>
        ## End of deny access to extension xml files
        RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR]
        # Block out any script trying to base64_encode crap to send via URL
        RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR]
        # Block out any script that includes a <script> tag in URL
        RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Send all blocked request to homepage with 403 Forbidden error!
        RewriteRule ^(.*)$ index.php [F,L]
        #
        ########## End - Rewrite rules to block out some common exploits

        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root)
        # RewriteBase /

        ########## Begin - Joomla! core SEF Section
        #
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/index.php
        RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC]
        RewriteRule (.*) index.php
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        #
        ########## End - Joomla! core SEF Section
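
    One approach that often works in this WordPress-under-Joomla layout, offered as a hedged sketch rather than a verified fix (it assumes the blog really lives at /blog/): keep Joomla's SEF rules from swallowing the blog's URLs, and give WordPress its own rewrite rules in the subfolder. In the root .htaccess, the SEF section would gain one extra condition:

        ########## Begin - Joomla! core SEF Section
        #
        # Leave anything under /blog/ to WordPress
        RewriteCond %{REQUEST_URI} !^/blog/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/index.php
        RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC]
        RewriteRule (.*) index.php

    ...and a /blog/.htaccess would carry the standard WordPress permalink rules:

        # /blog/.htaccess
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /blog/
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /blog/index.php [L]
        </IfModule>

    Since mod_rewrite uses the deepest .htaccess that enables a rewrite engine, the per-directory WordPress file alone may already be enough; the extra RewriteCond in the root is belt and braces.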


  • Visual Studio reports that not all code paths return a value, even though they do

    - by chris12892
    I have an API in NETMF C# that I am writing that includes a function to send an HTTP request. For those who are familiar with NETMF, this is a heavily modified version of the "webClient" example, which is a simple application that demonstrates how to submit an HTTP request and receive a response. In the sample, it simply prints the response and returns void. In my version, however, I need it to return the HTTP response. For some reason, Visual Studio reports that not all code paths return a value, even though, as far as I can tell, they do. Here is my code...

        /// <summary>
        /// This is a modified webClient
        /// </summary>
        /// <param name="url"></param>
        private string httpRequest(string url)
        {
            // Create an HTTP Web request.
            HttpWebRequest request = HttpWebRequest.Create(url) as HttpWebRequest;

            // Set request.KeepAlive to use a persistent connection.
            request.KeepAlive = true;

            // Get a response from the server.
            WebResponse resp = request.GetResponse();

            // Get the network response stream to read the page data.
            if (resp != null)
            {
                Stream respStream = resp.GetResponseStream();
                string page = "";
                byte[] byteData = new byte[4096];
                char[] charData = new char[4096];
                int bytesRead = 0;
                Decoder UTF8decoder = System.Text.Encoding.UTF8.GetDecoder();
                int totalBytes = 0;

                // Allow 5 seconds for reading the stream.
                respStream.ReadTimeout = 5000;

                // If we know the content length, read exactly that amount of
                // data; otherwise, read until there is nothing left to read.
                if (resp.ContentLength != -1)
                {
                    for (int dataRem = (int)resp.ContentLength; dataRem > 0; )
                    {
                        Thread.Sleep(500);
                        bytesRead = respStream.Read(byteData, 0, byteData.Length);
                        if (bytesRead == 0)
                            throw new Exception("Data less than expected");
                        dataRem -= bytesRead;

                        // Convert from bytes to chars, and add to the page string.
                        int byteUsed, charUsed;
                        bool completed = false;
                        totalBytes += bytesRead;
                        UTF8decoder.Convert(byteData, 0, bytesRead, charData, 0, bytesRead,
                            true, out byteUsed, out charUsed, out completed);
                        page = page + new String(charData, 0, charUsed);
                    }
                    page = new String(System.Text.Encoding.UTF8.GetChars(byteData));
                }
                else
                    throw new Exception("No Content-Length reported");

                // Close the response stream. For Keep-Alive streams, the
                // stream will remain open and will be pushed into the unused
                // stream list.
                resp.Close();
                return page;
            }
        }

    Any ideas? Thanks...
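
    For what it's worth, the compiler's complaint looks legitimate here: the return page; sits inside the if (resp != null) block, so when resp is null, control falls off the end of a string-returning method with no value. A minimal sketch of one way to close the gap, with the stream-reading body elided for brevity (the exception message is illustrative; a return null; at the end would satisfy the compiler just as well):

        private string httpRequest(string url)
        {
            HttpWebRequest request = HttpWebRequest.Create(url) as HttpWebRequest;
            request.KeepAlive = true;

            WebResponse resp = request.GetResponse();
            if (resp == null)
            {
                // Give the null case an exit; without this (or a throw/return
                // after the if block) not every code path returns a value.
                throw new Exception("No response received from " + url);
            }

            string page = "";
            // ... read resp.GetResponseStream() into 'page' exactly as before ...
            resp.Close();
            return page;
        }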


  • Why Do I See the "In Recovery" Msg, and How Can I Prevent it?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machines to host it. When we create a new branch, the build script creates a new database with the ID of that branch, creates the schema objects, and copies over a selection of data from the production shadow server. After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but will occasionally go back into "In Recovery" mode. The database is created in simple recovery mode. The file names aren't specified, so it uses default paths for files. The size of the database after loading data is ~400 MB. It is running in SQL Server 2005 compatibility mode. The command that creates the database is:

        sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END"

    ...where $(DBName) and $(DBServer) are MSBuild parameters.

    I got a nice clean log file this morning. When I turned on my computer it starts all five databases. However, two of them show transactions being rolled forward and backward. Then it just keeps trying to start up all five of the databases:

        2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'.
        2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'.
        2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'.
        2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'.
        2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'.
        2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'.
        2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'.

    In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I actually am able to use the databases much of the time.
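
    One hedged thing to rule out before digging deeper: SQL Server Express marks new databases AUTO_CLOSE ON by default, and an auto-closed database goes through startup (and, if it was not cleanly checkpointed, recovery) every time a connection touches it, which would match the endlessly repeating "Starting up database" lines above. A quick check and a possible mitigation (database names copied from the log; verify against your instance):

        -- Which databases auto-close, and what state they are in right now
        SELECT name, state_desc, is_auto_close_on
        FROM sys.databases;

        -- Keep the per-branch databases open between connections
        ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;
        ALTER DATABASE [DLG-R8979] SET AUTO_CLOSE OFF;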


  • Perl Script to search and replace in .SQL query file with user inputs

    - by T.Mount
    I have a .SQL file containing a large number of queries. They are being run against a database containing data for multiple states over multiple years. The machine I am running this on can only handle running the queries for one state, in one year, at a time. I am trying to create a Perl script that takes user input for the state abbreviation, the state id number, and the year. It then creates a directory for that state and year. Then it opens the "base" .SQL file and searches and replaces the base state id and year with the user input, and saves this new .SQL file to the created directory.

    The current script I have (below) stops at open(IN,'<$infile') with "Can't open [filename]". It seems that it is having difficulty finding or opening the .SQL file. I have quadruple-checked to make sure the paths are correct, and I have even tried replacing the $path with an absolute path for the base file. If it was having trouble with creating the new file I'd have more direction, but since it can't find/open the base file I do not know how to proceed.

        #!/usr/local/bin/perl
        use Cwd;

        $path = getcwd();

        # Cleans up the path
        $path =~ s/\\/\//sg;

        # User inputs
        print "What is the 2 letter state abbreviation for the state? Ex. 'GA'\n";
        $stlet = <>;
        print "What is the 2 digit state abbreviation for the state? Ex. '13'\n";
        $stdig = <>;
        print "What four-digit year are you doing the calculations for? Ex. '2008'\n";
        $year = <>;
        chomp $stlet;
        chomp $stdig;
        chomp $year;

        # Creates the directory
        mkdir($stlet);
        $new = $path."\/".$stlet;
        mkdir("$new/$year");

        $infile = '$path/Base/TABLE_1-26.sql';
        $outfile = '$path/$stlet/$year/TABLE_1-26.sql';

        open(IN,'<$infile') or die "Can't open $infile: $!\n";
        open(OUT,">$infile2") or die "Can't open $outfile: $!\n";

        print "Working...";

        while (my $search = <IN>) {
            chomp $search;
            $search =~ s/WHERE pop.grp = 132008/WHERE pop.grp = $stdig$year/g;
            print OUT "$search\n";
        }

        close(IN);
        close(OUT);

    I know I also probably need to tweak the regular expression some, but I'm trying to take things one at a time. This is my first Perl script, and I haven't really been able to find anything that handles .SQL files like this that I can understand. Thank you!
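
    The single quotes are almost certainly the culprit: in Perl, '$path/Base/TABLE_1-26.sql' does not interpolate $path, so the script literally tries to open a file named $path/Base/TABLE_1-26.sql. The same applies to open(IN,'<$infile'), and $infile2 in the OUT line looks like a typo for $outfile. A hedged rewrite of the relevant lines (three-argument open with lexical handles is also the safer modern idiom):

        my $infile  = "$path/Base/TABLE_1-26.sql";
        my $outfile = "$path/$stlet/$year/TABLE_1-26.sql";

        open(my $in,  '<', $infile)  or die "Can't open $infile: $!\n";
        open(my $out, '>', $outfile) or die "Can't open $outfile: $!\n";

        while (my $search = <$in>) {
            chomp $search;
            # '.' matches any character in a regex; escape it for a literal dot
            $search =~ s/WHERE pop\.grp = 132008/WHERE pop.grp = $stdig$year/g;
            print {$out} "$search\n";
        }

        close($in);
        close($out);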


  • Java JNI leak in C++ process

    - by user662056
    Hi all. I am a beginner in Java. My problem is: I am calling a Java class's method from C++. For this I am using JNI. Everything works correctly, but I have some memory LEAKS in the C++ program's process... So I made a simple example:

    1) I create a Java machine (jint res = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);)
    2) then I take a pointer to a Java class (jclass cls = env->FindClass("test_jni");)
    3) after that I create a Java class object by calling the constructor (testJavaObject = env->NewObject(cls, testConstruct);) AT THIS very moment 10 MB of memory is allocated in the C++ program's process
    4) next I delete the class, the object, and the Java machine... AT THIS very moment the 10 MB of memory is not freed.

    So below are a few lines of code of the C++ program:

        void main()
        {
            {
                // Env
                JNIEnv *env;
                // Java virtual machine
                JavaVM *jvm;
                JavaVMOption* options = new JavaVMOption[1];
                // class paths
                options[0].optionString = "-Djava.class.path=C:/Sun/SDK/jdk/lib;D:/jms_test/java_jni_leak;";
                // other options
                JavaVMInitArgs vm_args;
                vm_args.version = JNI_VERSION_1_6;
                vm_args.options = options;
                vm_args.nOptions = 1;
                vm_args.ignoreUnrecognized = false;

                // alloc part of memory (for test) before CreateJavaVM
                char* testMem0 = new char[1000];
                for(int i = 0; i < 1000; ++i) testMem0[i] = 'a';

                // create java VM
                jint res = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);

                // alloc part of memory (for test) after CreateJavaVM
                char* testMem1 = new char[1000];
                for(int i = 0; i < 1000; ++i) testMem1[i] = 'b';

                // find the Java class
                jclass cls = env->FindClass("test_jni");
                // id of the class constructor
                jmethodID testConstruct = env->GetMethodID(cls, "<init>", "()V");
                // the Java object
                // calling the constructor allocates 10 MB of memory in the C++ process
                jobject testJavaObject = env->NewObject(cls, testConstruct);

                // function DeleteLocalRef
                // at this very moment memory is not freed
                env->DeleteLocalRef(testJavaObject);
                env->DeleteLocalRef(cls);

                // 1!!!!!!!!!!!!!
                res = jvm->DestroyJavaVM();

                delete[] testMem0;
                delete[] testMem1;
                // at this very moment memory is not freed
            }
            int gg = 0;
        }

    The Java class (it just allocates some memory):

        import java.util.*;

        public class test_jni
        {
            ArrayList<String> testStringList;

            test_jni()
            {
                System.out.println("start constructor");
                testStringList = new ArrayList<String>();
                for(int i = 0; i < 1000000; ++i)
                {
                    // fill the list so the constructor allocates a chunk of heap
                    testStringList.add("TEEEEEEEEEEEEEEEEST");
                }
            }
        }

    Process memory view after creating the JavaVM and the Java object (testMem0 and testMem1 are the test blocks allocated by C++):

        **************
        testMem0
        **************
        JNI_CreateJavaVM
        **************
        testMem1
        **************
        // create java object
        jobject testJavaObject = env->NewObject(cls, testConstruct);
        **************

    Process memory view after destroying the JavaVM and deleting the ref to the Java object (testMem0 and testMem1 are deleted too):

        **************
        JNI_CreateJavaVM
        **************
        // create java object
        jobject testJavaObject = env->NewObject(cls, testConstruct);
        **************

    So testMem0 and testMem1 are deleted, but the JavaVM and Java object are not... What am I doing wrong, and how can I free the memory in the C++ process?
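
    A hedged reading of what is happening, based on general HotSpot/JNI behavior rather than on inspecting this particular build: DestroyJavaVM does not hand the Java heap back to the operating system, and a JVM generally cannot be unloaded from a process once it has been created, so the ~10 MB staying resident after teardown is expected rather than a leak in the calling code. What can be controlled is how large the heap may grow and how long local references pin objects. A sketch along those lines (the -Xmx value is illustrative):

        #include <jni.h>

        int main() {
            JavaVM* jvm;
            JNIEnv* env;

            JavaVMOption options[2];
            options[0].optionString = (char*)"-Djava.class.path=D:/jms_test/java_jni_leak";
            options[1].optionString = (char*)"-Xmx16m";  // cap the Java heap (illustrative)

            JavaVMInitArgs vm_args;
            vm_args.version = JNI_VERSION_1_6;
            vm_args.options = options;
            vm_args.nOptions = 2;
            vm_args.ignoreUnrecognized = false;

            if (JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args) != JNI_OK)
                return 1;

            jclass cls = env->FindClass("test_jni");
            jmethodID ctor = env->GetMethodID(cls, "<init>", "()V");

            // Scope local references so they are released deterministically.
            if (env->PushLocalFrame(16) == 0) {
                jobject obj = env->NewObject(cls, ctor);
                // ... use obj ...
                env->PopLocalFrame(NULL);  // releases obj and any other locals in the frame
            }

            jvm->DestroyJavaVM();  // the Java heap stays reserved in the process
            return 0;
        }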


  • Complete Guide to Symbolic Links (symlinks) on Windows or Linux

    - by Matthew Guay
    Want to easily access folders and files from different folders without maintaining duplicate copies? Here's how you can use symbolic links to link anything in Windows 7, Vista, XP, and Ubuntu.

    So What Are Symbolic Links Anyway?

    Symbolic links, otherwise known as symlinks, are basically advanced shortcuts. You can create symbolic links to individual files or folders, and these will then appear as though they are stored in the folder with the symbolic link, even though the symbolic link only points to their real location.

    There are two types of symbolic links: hard and soft. Soft symbolic links work essentially the same as a standard shortcut. When you open a soft link, you will be redirected to the folder where the files are stored. However, a hard link makes it appear as though the file or folder actually exists at the location of the symbolic link, and your applications won't know any different. Thus, hard links are of the most interest in this article.

    Why should I use Symbolic Links?

    There are many things we use symbolic links for, so here are some of the top uses we can think of:

    - Sync any folder with Dropbox - say, sync your Pidgin profile across computers
    - Move the settings folder for any program from its original location
    - Store your Music/Pictures/Videos on a second hard drive, but make them show up in your standard Music/Pictures/Videos folders so they'll be detected by your media programs (Windows 7 Libraries can also be good for this)
    - Keep important files accessible from multiple locations
    - And more!

    If you want to move files to a different drive or folder and then symbolically link them, follow these steps:

    1. Close any programs that may be accessing that file or folder.
    2. Move the file or folder to the new desired location.
    3. Follow the correct instructions below for your operating system to create the symbolic link.

    Caution: Make sure to never create a symbolic link inside of a symbolic link. For instance, don't create a symbolic link to a file that's contained in a symbolically linked folder. This can create a loop, which can cause millions of problems you don't want to deal with. Seriously.

    Create Symlinks in Any Edition of Windows in Explorer

    Creating symlinks is usually difficult, but thanks to the free Link Shell Extension, you can create symbolic links in all modern versions of Windows pain-free. You need to download both the Visual Studio 2005 redistributable, which contains the necessary prerequisites, and Link Shell Extension itself (links below). Download the correct version (32 bit or 64 bit) for your computer.

    Run and install the Visual Studio 2005 Redistributable installer first. Then install the Link Shell Extension on your computer. Your taskbar will temporarily disappear during the install, but will quickly come back.

    Now you're ready to start creating symbolic links. Browse to the folder or file you want to create a symbolic link from. Right-click the folder or file and select Pick Link Source.

    To create your symlink, right-click in the folder where you wish to save the symbolic link, select "Drop as…", and then choose the type of link you want. You can choose from several different options here; we chose the Hardlink Clone. This will create a hard link to the file or folder we selected. The Symbolic link option creates a soft link, while the Smart Copy will fully copy a folder containing symbolic links without breaking them. These options can be useful as well.

    Here's our hard-linked folder on our desktop.
    Notice that the folder looks like its contents are stored in Desktop\Downloads, when they are actually stored in C:\Users\Matthew\Downloads. Also, when links are created with the Link Shell Extension, they have a red arrow on them so you can still differentiate them. And this works the same way in XP as well.

    Symlinks via Command Prompt

    Or, for geeks who prefer working via the command line, here's how you can create symlinks in Command Prompt in Windows 7/Vista and XP.

    In Windows 7/Vista

    In Windows Vista and 7, we'll use the mklink command to create symbolic links. To use it, we have to open an administrator Command Prompt. Enter "command" in your start menu search, right-click on Command Prompt, and select "Run as administrator".

    To create a symbolic link, we need to enter the following in command prompt:

        mklink /prefix link_path file/folder_path

    First, choose the correct prefix. Mklink can create several types of links, including the following:

    - /D - creates a soft symbolic link, which is similar to a standard folder or file shortcut in Windows. This is the default option, and mklink will use it if you do not enter a prefix.
    - /H - creates a hard link to a file
    - /J - creates a hard link to a directory or folder

    So, once you've chosen the correct prefix, you need to enter the path you want for the symbolic link, and the path to the original file or folder. For example, if I wanted a folder in my Dropbox folder to appear like it was also stored on my desktop, I would enter the following:

        mklink /J C:\Users\Matthew\Desktop\Dropbox C:\Users\Matthew\Documents\Dropbox

    Note that the first path is the symbolic folder I wanted to create, while the second path is the real folder.

    Here, in this command prompt screenshot, you can see that I created a symbolic link of my Music folder to my desktop.

    And here's how it looks in Explorer. Note that all of my music is "really" stored in C:\Users\Matthew\Music, but here it looks like it is stored in C:\Users\Matthew\Desktop\Music.

    If your path has any spaces in it, you need to place quotes around it. Note also that the link can have a different name than the file it links to. For example, here I'm going to create a symbolic link to a document on my desktop:

        mklink /H "C:\Users\Matthew\Desktop\ebook.pdf" "C:\Users\Matthew\Downloads\Before You Call Tech Support.pdf"

    Don't forget the syntax:

        mklink /prefix link_path target_file/folder_path

    In Windows XP

    Windows XP doesn't include built-in command prompt support for symbolic links, but we can use the free Junction tool instead. Download Junction (link below), and unzip the folder. Now open Command Prompt (click Start, select All Programs, then Accessories, and select Command Prompt), and enter cd followed by the path of the folder where you saved Junction.

    Junction only creates hard symbolic links, since you can use shortcuts for soft ones. To create a hard symlink, we need to enter the following in command prompt:

        junction -s link_path file/folder_path

    As with mklink in Windows 7 or Vista, if your file/folder path has spaces in it, make sure to put quotes around your paths. Also, as usual, your symlink can have a different name than the file/folder it points to.

    Here, we're going to create a symbolic link to our My Music folder on the desktop. We entered:

        junction -s "C:\Documents and Settings\Administrator\Desktop\Music" "C:\Documents and Settings\Administrator\My Documents\My Music"

    And here are the contents of our symlink.
    Note that the path looks like these files are stored in a Music folder directly on the Desktop, when they are actually stored in My Documents\My Music. Once again, this works with both folders and individual files.

    Please note: Junction would work the same in Windows 7 or Vista, but since they include a built-in symbolic link tool we found it better to use that on those versions of Windows.

    Symlinks in Ubuntu

    Unix-based operating systems have supported symbolic links since their inception, so it is straightforward to create symbolic links in Linux distros such as Ubuntu. There's no graphical way to create them like the Link Shell Extension for Windows, so we'll just do it in Terminal. Open Terminal (open the Applications menu, select Accessories, and then click Terminal), and enter the following:

        ln -s file/folder_path link_path

    Note that this is the opposite of the Windows commands; you put the source for the link first, and then the path second.

    For example, let's create a symbolic link of our Pictures folder on our Desktop. To do this, we entered:

        ln -s /home/maguay/Pictures /home/maguay/Desktop

    Once again, here are the contents of our symlink folder. The pictures look as if they're stored directly in a Pictures folder on the Desktop, but they are actually stored in /home/maguay/Pictures.

    Delete Symlinks

    Removing symbolic links is very simple – just delete the link! Most of the command line utilities offer a way to delete a symbolic link via command prompt, but you don't need to go to the trouble.

    Conclusion

    Symbolic links can be very handy, and we use them constantly to help us stay organized and keep our hard drives from overflowing. Let us know how you use symbolic links on your computers!

    Download Link Shell Extension for Windows 7, Vista, and XP

    Download Junction for XP
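
    If you do want to remove a link from the command line, here is a hedged cheat sheet (paths reuse the examples above; double-check the behavior on your own machine). The key point: deleting a link removes only the link entry, never the target data, as long as the target still has at least one other name:

        :: Windows: a directory junction or symlink (mklink /J or /D) comes off with rmdir
        rmdir C:\Users\Matthew\Desktop\Music

        :: Windows: a file symlink or hard link entry comes off with del
        del C:\Users\Matthew\Desktop\ebook.pdf

        # Ubuntu: rm (or unlink) on the link path, with no trailing slash
        rm /home/maguay/Desktop/Pictures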


  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

    Feature Triage Team

    A subset of the feature crew, the triage team (which has representation from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or another frequency) dependent on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug considering the evidence and make a decision of whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix this) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure or further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.

    Bug Opened = Not Yet Assigned

    Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, Build number that they observed the issue in, regression details if applicable, how it was found, if a test case exists or needs to be created, etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

    "Issue" versus "Fix for issue"

    The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem).

    Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of a bug usually hides a spec defect or a new feature request.

    The person opening the bug should focus on describing the issue, rather than the solution. The person indicates what the fix is in their opinion by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

    Not Yet Assigned - Not Yet Assigned (on someone else's plate)

    If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate including potentially changing the repro steps.
    In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team.

    Even though there is no action on a dev on the team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changed status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs (a hedged query sketch follows at the end of this section).

    Not Yet Assigned - Resolved

    This state transition can only be made by the Feature Triage Team.

    0. Sometimes the bug description is not clear and in that case it gets Resolved as More Information Needed, so the original requestor can provide it.

    After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.

    1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.
    2. If it is "By Design" it gets resolved as such, indicating that the triage team does not think this is a bug.
    3. If the bug does not repro on latest bits, it is resolved as "No Repro".
    4. The most painful: if it is decided that we cannot fix it for this release it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resources and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision such as: existence of a reasonable workaround, frequency we expect users to encounter the issue, dependencies on other team to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, is it a regression from a previous release, impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, does the fix impact performance goals, and last but not least, severity of bug (e.g. loss of customer data, security threat, crash, hang).

    The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs, thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

    Not Yet Assigned - Assigned

    If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of his developer team based on a load balancing algorithm of their choosing.

    Sometimes, the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on code review by the triage team. In these cases, these instructions are provided in the comments section of the bug and when the developer is done they notify the triage team for final decision.

    Additionally, a Priority and Severity (from 0 to 4) has to be entered, e.g. a P0 means "drop anything you are doing and fix this now" whereas a P4 is something you get to after all P0, P1, P2, P3 bugs are fixed.

    From a testing perspective, if the bug was found through ad-hoc testing or an external team, the decision is made whether test cases should be added to avoid future regressions. This is communicated to the QA team.
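
    As promised above, a sketch of such a stale-bug query in TFS's Work Item Query Language (WIQL); the field names come from the stock bug work item type, while the member names and the two-week staleness window are made-up placeholders:

        SELECT [System.Id], [System.Title], [System.AssignedTo], [System.ChangedDate]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Bug'
          AND [System.CreatedBy] IN ('Alice Developer', 'Bob Tester')
          AND [System.State] <> 'Closed'
          AND [System.ChangedDate] < @Today - 14
        ORDER BY [System.ChangedDate] ASC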
    Assigned - Resolved

    When the developer receives the bug (they should be checking daily for new bugs on their plate, looking at bugs in order of priority and from older to newer), they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else).

    The developer needs to ensure that when the status is changed to Resolved, it is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that will verify the fix - the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

    Resolved - ??

    In all cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If the person is not happy, then they change the state from Resolved to Active, thus sending it back to the developer. If the developer and QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is change the status back to Not Yet Assigned with appropriate comments so the triage team can re-review.

    It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate and after a quick review, the PM would re-assign it to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed for verifying the bug fix (e.g. it was a refactoring suggestion bug typically not observable by the user), in which case it is fine to have a developer verify the fix, and ideally a different developer to the one that opened the bug.

    Other links on bug triage

    A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

    Your take?

    If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.

