Search Results

Search found 17278 results on 692 pages for 'directory conventions'.

Page 345/692 | < Previous Page | 341 342 343 344 345 346 347 348 349 350 351 352  | Next Page >

  • What ORM for .NET should I use?

    - by eKek0
    I'm relatively new to .NET and have been using Linq2Sql for almost a year, but it lacks some of the features I'm looking for now. I'm going to start a new project in which I want to use an ORM with the following characteristics: it has to be very productive, so I don't want to be dealing with the access layer to save or retrieve objects from the database, but it should allow me to easily tweak any object before actually committing it, and it should let me work easily with a changing database schema; it should allow me to extend the objects mapped from the database, for example to add virtual attributes to them (virtual columns on a table); it has to be (at least almost) database agnostic, letting me work with different databases in a transparent way; it should need little configuration, or be based on conventions; and it should let me work with Linq. So, do you know any ORM that I could use? Thank you for your help. EDIT: I know that one option is NHibernate. It appears to be the de facto standard for enterprise-level applications, but it also seems to be not very productive because of its steep learning curve. I have also read in other posts here on SO that it doesn't integrate well with Linq. Is all of that true?

    Read the article

  • homebrew path issue

    - by Shaun Stanislaus
    Master:~ shaunstanislaus$ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go)
    ==> This script will install: /usr/local/bin/brew /usr/local/Library/... /usr/local/share/man/man1/brew.1
    Press enter to continue
    ==> Downloading and Installing Homebrew...
    remote: Counting objects: 82368, done.
    remote: Compressing objects: 100% (39323/39323), done.
    remote: Total 82368 (delta 56782), reused 65301 (delta 42220)
    Receiving objects: 100% (82368/82368), 11.68 MiB | 1.59 MiB/s, done.
    Resolving deltas: 100% (56782/56782), done.
    From https://github.com/mxcl/homebrew
    * [new branch] master -> origin/master
    HEAD is now at 2ea1a0e smpeg: depends on gtk
    ==> Installation successful!
    You should run `brew doctor' *before* you install anything. Now type: brew help
    Master:~ shaunstanislaus$ brew doctor
    -bash: /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory
    Master:~ shaunstanislaus$ echo $PATH
    /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Users/shaunstanislaus/Library/Application Support/GoodSync:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/Users/shaunstanislaus/.ec2/bin:/Users/shaunstanislaus/.rvm/bin
    /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory

    How do I fix this path issue? I can't use the brew command, and I think I previously symlinked it to the wrong location. Please advise, thank you.
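    The $PATH itself looks fine; the "bad interpreter" error says the shebang line of /usr/local/bin/brew points at the Ruby 1.8 framework, which no longer exists on this machine (typically after an OS X upgrade). A minimal sketch of one way to check and fix it, assuming the system ruby at /usr/bin/ruby is present:

        # Show which interpreter the brew script asks for
        head -1 /usr/local/bin/brew

        # If it names .../Ruby.framework/Versions/1.8/usr/bin/ruby, point it
        # at the system ruby instead (back the file up first):
        sudo sed -i '' '1s|^#!.*|#!/usr/bin/ruby|' /usr/local/bin/brew

    Re-running the Homebrew install script is another way to end up with a fresh brew whose shebang matches this machine.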

    Read the article

  • getting expat to use .dtd for entity replacement in python

    - by nicolas78
    I'm trying to read in an xml file which looks like this:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <!DOCTYPE dblp SYSTEM "dblp.dtd">
        <dblp>
          <incollection>
            <author>Jos&eacute; A. Blakeley</author>
          </incollection>
        </dblp>

    The point that creates the problem is the Jos&eacute; A. Blakeley part: the parser calls its character handler twice, once with "Jos", once with " A. Blakeley". Now I understand this may be the correct behaviour if it doesn't know the eacute entity. However, this is defined in dblp.dtd, which I have. I don't seem to be able to convince expat to use this file, though. All I can say is:

        p = xml.parsers.expat.ParserCreate()
        # tried with and without the following line
        p.SetParamEntityParsing(xml.parsers.expat.XML_PARAM_ENTITY_PARSING_ALWAYS)
        p.UseForeignDTD(True)
        f = open(dblp_file, "r")
        p.ParseFile(f)

    but expat still doesn't recognize my entity. Why is there no way to tell expat which DTD to use? I've tried putting the file into the same directory as the XML, putting the file into the program's working directory, and replacing the reference in the xml file with an absolute path. What am I missing? Thanks.
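    As far as I can tell, the piece that is usually missing here is an ExternalEntityRefHandler: enabling parameter-entity parsing only makes expat willing to load the external subset, but your code still has to hand it the DTD. A sketch, assuming dblp.dtd sits next to the script and dblp.xml is the document above (both filenames are illustrative):

        import xml.parsers.expat

        DTD_PATH = "dblp.dtd"
        chars = []

        def external_entity_ref(context, base, system_id, public_id):
            # Called for the external DTD subset; feed it our local copy.
            ext = p.ExternalEntityParserCreate(context)
            with open(DTD_PATH, "rb") as dtd:
                ext.ParseFile(dtd)
            return 1

        p = xml.parsers.expat.ParserCreate()
        p.SetParamEntityParsing(xml.parsers.expat.XML_PARAM_ENTITY_PARSING_ALWAYS)
        p.ExternalEntityRefHandler = external_entity_ref
        p.CharacterDataHandler = chars.append

        with open("dblp.xml", "rb") as f:
            p.ParseFile(f)

    With the DTD parsed this way, the &eacute; reference should arrive with the entity replaced (expat may still split character data across calls, but the accented character will be included rather than dropped).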

    Read the article

  • Unwanted Shell expansion when assigning the output of a shell command to a variable

    - by Rob Goodwin
    I am exporting a portion of a local prototype svn repository to import into a different repo. We have a number of svn properties set throughout the repo, so I figured I would write a script to list the file elements and their corresponding properties. How hard can that be, right? So I started writing a bash script that would assign the output of svn proplist -v to a variable, so I could check whether the specified file had any properties:

        #!/bin/bash
        svn proplist -v $1
        o=$(svn proplist -v "$1")
        echo $o

    Now this works fine and echoes the output of the svn proplist command. But if the proplist command returns something like "svn:ignore : * build", it performs a shell expansion on the * and inserts the entire directory listing prior to the build property value. So if the directory had a.txt, b.txt and build files/dirs in it, the output would look like "svn:ignore a.txt b.txt build". I figure I need to somehow escape the output or something to keep the expansion from happening, but have yet to find something that works. There are other ways to do this, but I hate when I cannot figure something out, and I have to admit, I think this one beat me (well, given the time I can spend on it).
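    For what it's worth, the expansion doesn't happen in the assignment at all; it happens at the unquoted echo $o, where word splitting lets the bare * hit the current directory. Quoting the variable is usually the whole fix; a minimal sketch:

        #!/bin/bash
        o=$(svn proplist -v "$1")
        echo "$o"        # the quotes keep * as a literal asterisk and preserve newlines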

    Read the article

  • What are some typing patterns using a standard QWERTY keyboard that work well for you as a programmer?

    - by OrbMan
    After hunting and pecking for about 35 years, I have decided to learn to type. I am learning QWERTY and have learned about 2/3 of the letters so far. While learning, I have noticed how asymmetrical the keyboard is, which really bothers me. (I will probably switch to a symmetrical keyboard eventually, but for now am trying to do everything as standard and "correct" as possible.) Although I am not there yet in my lessons, it seems that many of the keys I am going to use as a C# web developer are supposed to be typed by the pinky of my right hand. Are there any typing patterns you have developed that are more ergonomic (or faster) when typing large volumes of code rife with braces, colons, semi-colons and quotes? Or should I just accept the fact that every other key is going to be hit with my right pinky? It is not that speed is such a huge concern, as much as that it seems so inefficient to rely on one finger so much... As an example, some of the conventions I use as a hunt-and-pecker, like typing open and close braces right away with my index and middle finger and then hitting the left arrow key to fill in the inner content, don't seem to work as well with just a pinky. What are some typing patterns using a standard QWERTY keyboard that work really well for you as a programmer? Update: US layout, and I use the home row. Update 2: Despite my best efforts to the contrary, people are interpreting this question as "how do I learn to type" or "what keyboard should I use". Take it as a given that I will learn to type, and that I will be doing so on a standard QWERTY layout keyboard, not DVORAK. I am interested in acquiring a skill that will be useful wherever I go.

    Read the article

  • Authentication Error when accessing Sharepoint list via web service

    - by Joe
    I wrote a windows service a few months ago that would ping a Sharepoint list using the _vti_bin/lists.asmx function GetListItemChanges. It was working fine until a few weeks ago, when my company upgraded our Sharepoint instance to SP1. Now whenever my service attempts to access Sharepoint I receive a 401.1 authentication error: "You are not authorized to view this page. You do not have permission to view this directory or page using the credentials that you supplied. Please try the following: Contact the Web site administrator if you believe you should be able to view this directory or page. HTTP Error 401.1 - Unauthorized: Access is denied due to invalid credentials. Internet Information Services (IIS)". I have checked, and my privileges on the site have not changed. Here is the code in which I call the list:

        Lists listsService = new Lists();
        listsService.Credentials = new NetworkCredential("UserName", "Password", "domain");
        Result = listsService.GetListItemChanges("List name", null, dTime.ToString(), null);

    It has also been brought to my attention that basic authentication may have been disabled on our farm. I don't believe I'm using that, but I may be mistaken.
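    A 401.1 that appears right after a farm upgrade is often about the authentication handshake (NTLM vs Kerberos negotiation, or the loopback check when calling the site by host name from the same box) rather than list permissions. One cheap thing to try, sketched below with a placeholder URL: let the service present its own Windows identity instead of a hand-built NetworkCredential, so the negotiated scheme matches whatever the upgraded farm now expects.

        // Sketch: the URL is a placeholder; dTime is the same variable used above.
        Lists listsService = new Lists();
        listsService.Url = "http://yoursharepointserver/_vti_bin/lists.asmx";
        listsService.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
        System.Xml.XmlNode result =
            listsService.GetListItemChanges("List name", null, dTime.ToString(), null);

    If that works, the problem is how the explicit credentials are being negotiated (or which account they belong to), not the list permissions themselves.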

    Read the article

  • TeamCity output artifacts not published to IIS7 folder

    - by clausas
    I am trying to set up TeamCity to build and deploy an ASP.NET MVC application. I have the setup running successfully on other servers using TeamCity 4.5, but the new server is running TeamCity 6, and I am having trouble getting it to work as expected. TeamCity manages to get the files from source control, and the project (Visual Studio Solution 2008 set to "Build") builds and outputs the necessary files as expected. The problem seems to be with my artifact paths, as the output files are not copied to the website folder. My solution consists of a dozen projects, of which the "Web" project is the interesting one in this case. The build checkout directory is C:\TeamCity\buildAgent\work\7da320cebf0ee541, and the "Web" project is found in C:\TeamCity\buildAgent\work\7da320cebf0ee541\Web. I have set up my build configuration with the following artifact paths (relative from the checkout directory to the folder containing the website):

        Web/bin=>../../../../inetpub/wwwroot/staging/bin
        Web/Content=>../../../../inetpub/wwwroot/staging/Content
        Web/Views=>../../../../inetpub/wwwroot/staging/Views
        Web/Media=>../../../../inetpub/wwwroot/staging/Media
        Web/*.aspx=>../../../../inetpub/wwwroot/staging
        Web/*.asax=>../../../../inetpub/wwwroot/staging

    (I've tried with more ../ just in case, but it didn't make a difference.) This is the output I get from the log:

        [19:35:29]: Publishing artifacts (1s)
        [19:35:29]: [Publishing artifacts] Paths to publish: [Web/bin=../../../../inetpub/wwwroot/staging/bin, Web/Content=../../../../inetpub/wwwroot/staging/Content, Web/obj=../../../../inetpub/wwwroot/staging/obj, Web/Views=../../../../inetpub/wwwroot/staging/Views, Web/Media=../../../../inetpub/wwwroot/staging/Media, Web/.aspx=../../../../inetpub/wwwroot/staging, Web/.asax=../../../../inetpub/wwwroot/staging, teamcity-info.xml]
        [19:35:30]: [Publishing artifacts] Sending files
        [19:35:32]: Build finished

    Logs from some of the other servers running TeamCity 4.5 use a different format, with a line for each of the artifacts being published; I'm not sure if this is relevant or only due to a different logging format. Everything seems to be working, but no files are put in my website folder after a build. Am I missing something here? Any help will be much appreciated :)
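    If I remember the artifact mechanism correctly (worth double-checking against the TeamCity 6 documentation), artifact paths publish into the server's artifact storage and the target after => is interpreted inside that storage, so the ../../../../inetpub targets may simply be dropped on the newer version even though "Sending files" is logged. Deploying to a local IIS folder is usually done with an explicit build step instead; a sketch of a Command Line step on the agent, with the source and target taken from the description above:

        rem Command Line build step (sketch); adjust the paths to taste
        xcopy "%teamcity.build.checkoutDir%\Web" "C:\inetpub\wwwroot\staging" /E /I /Y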

    Read the article

  • Attachment_fu file saving problem

    - by Anand
    Attachment_fu plugin is kind of old, but I have to modify an old app and I can't use another plugin like paperclip etc. So here's the code without further ado.

        Submissions table structure:
        | content_type | varchar(255) | YES | | NULL
        | filename     | varchar(255) | YES | | NULL

        app/models/submission.rb:
        has_attachment :storage => :file_system,
                       :path_prefix => 'public/submissions',
                       :max_size => 2.megabytes,
                       :content_type => ['application/pdf', 'application/msword', 'text/plain']

        app/models/user.rb:
        has_one :submission, :dependent => :destroy

        app/views/user/some_action.html.erb:
        <% form_for :user, :url => { :action => "some_action" }, :html => {:multipart => true} do |f| %>
        ....
        <%= file_field_tag "submission[uploaded_data]" %>
        <%end%>

        app/controllers/user_controller.rb:
        @user = User.find_user(session[:user_id])
        @submission = @user.submission
        if request.post?
          @submission.uploaded_data = params[:submission][:uploaded_data]
        end

    When the form is submitted, the database fields "content_type" and "filename" get updated and display the correct values, but the file does not appear in the public/submissions/ directory. I have checked the permissions on the submissions directory. What am I missing? Many thanks.
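    One thing worth ruling out, since the rest of the controller isn't shown: attachment_fu writes the file to :path_prefix in an after_save callback, so assigning uploaded_data without a subsequent save updates nothing on disk (if the columns are being written by a save that happens elsewhere, before the assignment, you would see exactly this symptom). A sketch of the action with an explicit save, using the same names as above:

        @user = User.find_user(session[:user_id])
        @submission = @user.submission
        if request.post?
          @submission.uploaded_data = params[:submission][:uploaded_data]
          @submission.save   # attachment_fu writes the file in its after_save callback
        end

    If the save is already happening after the assignment, the next suspect would be the relative :path_prefix, which resolves against the RAILS_ROOT of the running server process.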

    Read the article

  • Sql Server Backup and move backup file: How to cope with file permissions?

    - by Stefan Steinegger
    With our product we have a simple backup tool for the SQL Server database. This tool should just make a full backup and restore to and from any folder. Of course, the user (usually an administrator) needs permission to write to the target folder. To avoid the problem of not being able to perform a backup to a network drive, I write the backup to a temp file in the SQL Server backup directory, then move it to the target folder. This requires permission to delete the temporary file from the SQL Server's backup folder. Restore is the same in the other direction. This seemed to work fine until someone tested it on Vista, where the user does not have write access to the backup folder by default. There are many ways to solve this, but none of them seems really nice. One solution would be to find another folder for the temporary file; both the SQL Server user and the administrator performing the backup would need read and write permissions there. Is there such a directory? Any other ideas? Thanks a lot.

    Read the article

  • Network Authentication when running exe from WMI

    - by Andy
    Hi, I have a C# exe that needs to be run using WMI and access a network share. However, when I access the share I get an UnauthorizedAccessException. If I run the exe directly, the share is accessible. I am using the same user account in both cases. There are two parts to my application: a GUI client that runs on a local PC, and a backend process that runs on a remote PC. When the client needs to connect to the backend, it first launches the remote process using WMI (code reproduced below). The remote process does a number of things, including accessing a network share using Directory.GetDirectories(), and reports back to the client. When the remote process is launched automatically by the client using WMI, it cannot access the network share. However, if I connect to the remote machine using Remote Desktop and manually launch the backend process, access to the network share succeeds. The user specified in the WMI call and the user logged in for the Remote Desktop session are the same, so the permissions should be the same, shouldn't they? I see in the MSDN entry for Directory.Exists() it states "The Exists method does not perform network authentication. If you query an existing network share without being pre-authenticated, the Exists method will return false." I assume this is related? How can I ensure the user is authenticated correctly in a WMI session?

        ConnectionOptions opts = new ConnectionOptions();
        opts.Username = username;
        opts.Password = password;
        ManagementPath path = new ManagementPath(string.Format("\\\\{0}\\root\\cimv2:Win32_Process", remoteHost));
        ManagementScope scope = new ManagementScope(path, opts);
        scope.Connect();
        ObjectGetOptions getOpts = new ObjectGetOptions();
        using (ManagementClass mngClass = new ManagementClass(scope, path, getOpts))
        {
            ManagementBaseObject inParams = mngClass.GetMethodParameters("Create");
            inParams["CommandLine"] = commandLine;
            ManagementBaseObject outParams = mngClass.InvokeMethod("Create", inParams, null);
        }
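    As far as I can tell, this is the classic one-hop credential problem: a process started through Win32_Process.Create runs in a non-interactive session on the remote box, and the credentials WMI used to create it are not available for that process to present to a further file server, which matches the "not pre-authenticated" note in the MSDN quote. One workaround is for the backend process itself to authenticate to the share before touching it; a sketch using "net use" with placeholder server, share and account names:

        // Hypothetical helper inside the backend process; \\fileserver\share,
        // DOMAIN\user and the password are placeholders, not values from the question.
        var psi = new System.Diagnostics.ProcessStartInfo(
            "net", @"use \\fileserver\share /user:DOMAIN\user P@ssw0rd")
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = System.Diagnostics.Process.Start(psi))
        {
            p.WaitForExit();
        }
        var dirs = System.IO.Directory.GetDirectories(@"\\fileserver\share");

    P/Invoking WNetAddConnection2, or running the backend as a service under an account that already has rights on the share, are tidier variations on the same idea.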

    Read the article

  • Configuring mod_rewrite and mod_jk for Apache 2.2 and JBoss 4.2.3

    - by The Pretender
    Hello! My problem is as follows: I have a JBoss 4.2.3 application server with an AJP 1.3 connector running on one host under Windows (192.168.1.2 in my test environment), and Apache 2.2.14 running on another FreeBSD box (192.168.1.10). Apache acts as a "front gate" for all requests and sends them to JBoss via mod_jk. Everything was working fine until I had to do some SEO optimizations. These optimizations include SEF urls, so I decided to use mod_rewrite in Apache to alter requests before they are sent to JBoss. Basically, I need to implement 2 rules: 1) redirect old URLs like "http://hostname/directory/" to "http://hostname/" with a permanent redirect, and 2) forward URLs like "http://hostname/wtf/123/" to "http://hostname/wtf/view.htm?id=123" so that the end user doesn't see the "ugly" URL (the actual rewrite). Here is my Apache config for the test virtual host:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "/usr/local/www/dummy"
            ServerName 192.168.1.10
            <IfModule mod_rewrite.c>
                RewriteEngine On
                RewriteRule /directory/(.*) /$1 [R=permanent,L]
                RewriteRule ^/([^/]+)/([0-9]+)/?$ /$1/view.htm?id=$2
            </IfModule>
            JkMount /* jsp-hostname
            ErrorLog "/var/log/dummy-host.example.com-error_log"
            CustomLog "/var/log/dummy-host.example.com-access_log" common
        </VirtualHost>

    The problem is that the second rewrite rule doesn't work: requests slip through to JBoss unchanged, so I get a Tomcat 404 error. But if I add a redirect flag to the second rule, like

        RewriteRule ^/([^/]+)/([0-9]+)/?$ /$1/view.htm?id=$2 [R,L]

    it works like a charm. But a redirect is not what I need here :) . I suspect that the problem is that requests are forwarded to another host (192.168.1.2), but I really don't have any idea on how to make it work. Any help would be appreciated :)
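    A commonly suggested change for mod_rewrite sitting in front of mod_jk (not verified against this exact setup): when the rewritten URI still has to be served through the JkMount, add the PT (pass-through) flag so the substituted URI is handed back to the URL-mapping phase instead of being treated as a local file path. A sketch of the second rule with that flag:

        RewriteRule ^/([^/]+)/([0-9]+)/?$ /$1/view.htm?id=$2 [PT,L]

    With a plain internal rewrite (no flags), the result is mapped against DocumentRoot on the FreeBSD box, which would explain the 404 coming back from the Java side only when the redirect flag is absent.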

    Read the article

  • Code Organization Connundrum: Web Project With Multiple Supporting DLLs?

    - by Code Sherpa
    Hi. I am trying to get a handle on the best practice for code organization within my project. I have looked around on the internet for good examples and, so far, I have seen examples of a web project with one or multiple supporting class libraries that it references, or a web project with sub-folders that follow its namespace conventions. Assuming there is no right answer, this is what I currently have for code organization:

        MyProjectWeb - This is my web site. I am referencing my class libraries here.
        MyProject.DLL - As the base namespace, I am using this DLL for files that need to be generally consumable. For example, my class "Enums" that has all the enumerations in my project lives there, as does class MyProjectException for all exception handling.
        MyProject.IO.DLL - This is a grouping of maybe 20 files that handle file upload and download (so far).
        MyProject.Utilities.DLL - All my common classes and methods bunched up together in one generally consumable DLL. Each class follows an "XHelper" convention, such as SqlHelper, AuthHelper, SerializationHelper, and so on.
        MyProject.Web.DLL - I am using this DLL as the main client interface. Right now, the majority of class files here are: 1) properties (such as School, Location, Account, Posts) and 2) authorization stuff (such as custom membership, custom role, and custom profile providers).

    My question is simply - does this seem logical? Also, how do I avoid having to cross-reference DLLs from one project library to the next? For example, MyProject.Web.DLL uses code from MyProject.Utilities.DLL, and MyProject.Utilities.DLL uses code from MyProject.DLL. Is this solved by clicking on properties and selecting "Dependencies"? I tried that but still don't seem to be accessing the namespaces of the assembly I have selected. Do I have to reference every assembly I need for each class library? Responses appreciated and thanks for your patience.

    Read the article

  • Debugging fortran code in Eclipse with Photran and GDB debugger: missing symbols

    - by tvandenbrande
    I have a program, written in Fortran 90, previously successfully compiled on a Compaq compiler and working, that I'm now trying to compile with gfortran. I can compile the code to an .exe and run it. It works fine until a certain point in the routine and then an error is thrown. My current configuration: Windows 7; Eclipse Juno with CDT; Photran; Cygwin installation with the gfortran compiler and GDB debugger (gdb.exe). Configuration for the debugger: GDB command set: Standard (Windows); Protocol: mi; Shared libraries: don't load shared library symbols automatically (when activating this, no changes are noted). When running the debug command I get the following output:

        .gdbinit: No such file or directory.
        Reading symbols from /cygdrive/c/Users/thys/Documents/doctoraat/12_in progress/Hamfem/Debug/Hamfem.exe...done.
        auto-solib-add on
        Undefined command: "auto-solib-add". Try "help".
        Warning: C:/Users/thys/Documents/doctoraat/12_in progress/Hamfem/Hamfem/in: No such file or directory.
        [New Thread 5816.0x1914]
        [New Thread 5816.0x654]

    Basically that leaves me with 2 questions: Where can I find the .gdbinit file in the Cygwin installation? Are there any other possible errors in my setup, or points to think about?
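    On the .gdbinit question: gdb looks for it in the home directory, which for a Cygwin gdb is whatever $HOME points to (usually /home/<username> under the Cygwin root); the first message just means it doesn't exist yet. The "Undefined command" line suggests something is issuing auto-solib-add on without the leading set (the valid command is set auto-solib-add on). A minimal sketch, run from a Cygwin shell:

        # Create a .gdbinit where this gdb will find it
        echo "set auto-solib-add on" > "$HOME/.gdbinit"

    The warning about a path ending in /in may be another symptom of the space in the "12_in progress" folder name being split somewhere along the tool chain; trying the project from a path without spaces would rule that out.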

    Read the article

  • PHP import functions

    - by ninuhadida
    Hi, I'm trying to find the best pragmatic approach to import functions on the fly... let me explain. Say I have a directory called functions which has these files:

        array_select.func.php
        stat_mediam.func.php
        stat_mean.func.php
        .....

    I would like to load each individual file (which has a function defined inside) and use it just like an internal php function, such as array_pop(), array_shift(), etc. Once I stumbled on a tutorial (which I can't find again now) that compiled user-defined functions as part of a PHP installation, although that's not a very good solution because on shared/reseller hosting you can't recompile the PHP installation. I don't want to have conflicts with future versions of PHP or other extensions; i.e. if a function I named X suddenly becomes part of the internal php functions (even though it might not have the same functionality per se), I don't want PHP to throw a fatal error because of this and fail miserably. So the best method that I can think of is to check if a function is defined, using function_exists(); if so, throw a notice so that it's easy to track in the log files, otherwise define the function. However that will probably translate to having a lot of include/require statements in other files where I need such a function, which I don't really like. Or possibly, read the directory, loop over each *.func.php file and include_once each one, though I find this a bit ugly. The question is, have you ever stumbled upon some source code which handled such a case? How was it implemented? Did you ever do something similar? I need as many ideas as possible! :)
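    A minimal sketch of the loop-and-guard idea described above (the functions/ path, the .func.php suffix and the array_select name come from the question; the function body itself is illustrative):

        <?php
        // loader.php - pull in every *.func.php next to this file
        foreach (glob(__DIR__ . '/functions/*.func.php') as $file) {
            include_once $file;
        }

        // array_select.func.php - each file guards its own definition
        if (function_exists('array_select')) {
            trigger_error('array_select is already defined, skipping', E_USER_NOTICE);
        } else {
            function array_select(array $arr, array $keys)
            {
                return array_intersect_key($arr, array_flip($keys));
            }
        }

    The guard keeps a future built-in with the same name from causing a fatal redeclaration error, and the notice shows up in the logs exactly as described; a single include of loader.php then replaces the scattered include/require statements.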

    Read the article

  • Enforce strong type checking in C (type strictness for typedefs)

    - by quinmars
    Is there a way to enforce an explicit cast between typedefs of the same underlying type? I have to deal with utf8, and sometimes I get confused between the indices for the character count and the byte count. So it would be nice to have some typedefs:

        typedef unsigned int char_idx_t;
        typedef unsigned int byte_idx_t;

    with the addition that you need an explicit cast between them:

        char_idx_t a = 0;
        byte_idx_t b;
        b = a;               // compile warning
        b = (byte_idx_t) a;  // ok

    I know that such a feature doesn't exist in C, but maybe you know a trick or a compiler extension (preferably gcc) that does that. EDIT: I still don't really like Hungarian notation in general. I couldn't use it for this problem because of project coding conventions, but I have used it now in another similar case, where the types are also the same and the meanings are very similar. And I have to admit: it helps. I would never go and declare every integer with a starting "i", but as in Joel's example for overlapping types, it can be life saving.
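    Plain C won't diagnose assignments between two typedefs of the same arithmetic type, so the usual trick is to wrap each index in a single-member struct: distinct struct types are incompatible with each other, and the "cast" becomes a small conversion helper. A sketch (the names mirror the typedefs above):

        #include <stdio.h>

        typedef struct { unsigned int v; } char_idx_t;
        typedef struct { unsigned int v; } byte_idx_t;

        /* the explicit "cast" is now an ordinary conversion function */
        static inline byte_idx_t byte_idx_from_char_idx(char_idx_t c)
        {
            byte_idx_t b = { c.v };
            return b;
        }

        int main(void)
        {
            char_idx_t a = { 0 };
            byte_idx_t b;
            /* b = a;  -- now a hard compile error: incompatible types */
            b = byte_idx_from_char_idx(a);
            printf("%u\n", b.v);
            return 0;
        }

    The cost is the .v member showing up at every use site and in arithmetic, which is why this tends to be reserved for the handful of index types that are genuinely easy to mix up.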

    Read the article

  • Passing options in autospec with Cucumber in Ruby on Rails Development

    - by TK
    I always run autospec to run features and RSpec at the same time, but running all the features is often time-consuming on my local computer. I would run every feature before committing code. I would like to pass an argument to the autospec command, but autospec obviously doesn't accept arguments directly. Here's the output of autospec -h:

        autotest [options]
        options:
        -h -help   You're looking at it.
        -v         Be verbose. Prints files that autotest doesn't know how to map to tests.
        -q         Be more quiet.
        -f         Fast start. Doesn't initially run tests at start.

    I do have a cucumber.yml in the config directory. I also have rerun.txt in the Rails root directory. cucumber -h gives me a lot of information about arguments. How can I run autospec against features that are tagged as @wip? I think I can make use of config/cucumber.yml: there are profile definitions, and I can run cucumber -p wip to run only @wip-tagged features, but I'd like to do this with autospec. I would appreciate any tips for working with many spec and feature files.
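    If I remember Cucumber's autotest integration correctly (worth verifying against the gem version in use), it is enabled by running with AUTOFEATURE=true and it takes its arguments from profiles named autotest and autotest-all in config/cucumber.yml, so restricting those profiles to @wip should narrow what autospec runs. A sketch of the profiles, with the existing ones left intact:

        # config/cucumber.yml (sketch; the autotest profile names follow that convention)
        default: --format progress features
        wip: --tags @wip features
        autotest: --tags @wip --format pretty features
        autotest-all: --format progress features

    It would then be started as AUTOFEATURE=true autospec from the project root.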

    Read the article

  • JAVASCRIPT ENABLED [closed]

    - by kirchoffs415
    Hi, I hope somebody can help. I keep getting the following message when I log on: "Your Javascript is disabled. Limited functionality is available." It will stay for maybe a day, sometimes two. I have uninstalled Javascript and reinstalled, but still the same. I am using Chrome. Any help would be gratefully received, many thanks, Dominic. P.S. My system spec is as follows:

        OS Name: Microsoft® Windows Vista™ Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        Other OS Description: Not Available
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 Mhz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys

    Read the article

  • Error loading shared libraries

    - by user459811
    Hi, I'm running Eclipse on Ubuntu using a g++ compiler, and I'm trying to run a sample program that uses Xerces. The build produced no errors; however, when I attempted to run the program, I received this error: "error while loading shared libraries: libxerces-c-3.1.so: cannot open shared object file: No such file or directory". libxerces-c-3.1.so is in the directory /opt/lib, which I have included as a library in Eclipse. The file is there when I check the folder. When I perform an 'echo $LD_LIBRARY_PATH', /opt/lib is also listed. Any ideas as to where the problem lies? Thanks. An 'ldd libxerces-c-3.1.so' command yields the following output:

        linux-vdso.so.1 => (0x00007fffeafff000)
        libnsl.so.1 => /lib/libnsl.so.1 (0x00007fa3d2b83000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x00007fa3d2966000)
        libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007fa3d265f000)
        libm.so.6 => /lib/libm.so.6 (0x00007fa3d23dc000)
        libc.so.6 => /lib/libc.so.6 (0x00007fa3d2059000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00007fa3d1e42000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fa3d337d000)
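    The Eclipse library settings and an LD_LIBRARY_PATH exported in a shell only help at link time or in that shell; a program launched from an Eclipse Run/Debug configuration may see neither, because a GUI-launched Eclipse does not read ~/.bashrc. Two ways to make /opt/lib visible to the dynamic loader, sketched below (the xerces.conf filename is just a suggestion):

        # Option 1: register /opt/lib system-wide with ldconfig
        echo "/opt/lib" | sudo tee /etc/ld.so.conf.d/xerces.conf
        sudo ldconfig

        # Option 2: add the variable to the Eclipse run configuration
        # (Run Configurations -> Environment tab):
        #   LD_LIBRARY_PATH=/opt/lib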

    Read the article

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there: I came up with this question while playing with the Firefox web cache: how does the browser cache a response within limited disk space (taking my configuration as an example, 50MB is the upper bound)? I think two approaches can be employed. One is to cache whole response objects one by one, but this is inefficient and will introduce external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50MB) as one contiguous file and split it into fixed-length slots; incoming response objects are also treated as blocks of data with the same length as the slots. We can fill slots until the whole file is used up, and then some replacement algorithm can be used to swap out old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy. But when I look in Firefox's Cache directory, I find it (maybe) uses a different method: a lot of variable-length files reside in that directory, and all those files are filled with undisplayable characters. I don't know, but I really want to know, what mechanism a production browser, e.g. Firefox, employs to implement its web cache. Regards.

    Read the article

  • Legitimate uses of the Function constructor

    - by Marcel Korpel
    As repeatedly said, it is considered bad practice to use the Function constructor (also see the ECMAScript Language Specification, 5th edition, § 15.3.2.1): new Function ([arg1[, arg2[, … argN]],] functionBody) (where all arguments are strings containing argument names and the last (or only) string contains the function body). To recapitulate, it is said to be slow, as explained by the Opera team: Each time […] the Function constructor is called on a string representing source code, the script engine must start the machinery that converts the source code to executable code. This is usually expensive for performance – easily a hundred times more expensive than a simple function call, for example. (Mark ‘Tarquin’ Wilton-Jones) Though it's not that bad, according to this post on MDC (I didn't test this myself using the current version of Firefox, though). Crockford adds that [t]he quoting conventions of the language make it very difficult to correctly express a function body as a string. In the string form, early error checking cannot be done. […] And it is wasteful of memory because each function requires its own independent implementation. Another difference is that a function defined by a Function constructor does not inherit any scope other than the global scope (which all functions inherit). (MDC) Apart from this, you have to be attentive to avoid injection of malicious code, when you create a new Function using dynamic contents. Lots of disadvantages and it is intelligible that ECMAScript 5 discourages the use of the Function constructor by throwing an exception when using it in strict mode (§ 13.1). That said, T.J. Crowder says in an answer that [t]here's almost never any need for the similar […] new Function(...), either, again except for some advanced edge cases. So, now I am wondering: what are these “advanced edge cases”? Are there legitimate uses of the Function constructor?

    Read the article

  • Using SVN post-commit hook to update only files that have been commited

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository. I have set up an SVN post-commit hook so that whenever a commit is made to the repository the development site is updated: cd /home/www/dev_ssl /usr/bin/svn up This works fine but due to the size of the repository the updates take a long time (approx. 3 minutes) which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to only update those files/directories that have been committed but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. If committing the follow files: /branches/feature_x/images/logo.jpg /branches/feature_x/css/screen.css It would update the directory: /branches/feature_x/ Can anyone help me create a solution that achieves this please? Thanks! Update: The repository and development site are located on the same server so network issues shouldn't be involved. CPU usage is very low, and I/O should be ok (it's running on hi-spec dedicated server) The development site is approx. 7.5GB in size and contains approx. 600,000 items, this is mainly due to having multiple branches/tags

    Read the article

  • Android ADB has moved and Eclipse is looking in the old place

    - by Peter Nelson
    I did an SDK update last night and it moved adb.exe. In its place it left a file called "adb_has_moved.txt" saying The adb tool has moved to platform-tools/ If you don't see this directory in your SDK, launch the SDK and AVD Manager (execute the android tool) and install "Android SDK Platform-tools" Please also update your PATH environment variable to include the platform-tools/ directory, so you can execute adb from any location. So I did all that, including the PATH and now I can start adb.exe from any DOS prompt. But I still can't start it from Eclipse (Galileo 3.52). When I try it says Location of the Android SDK has not been set up in the preferences ... which is not true. The SDK IS set up in Preferences. The real problem is at the top of the Preferences window where it says "Could not find C:\SDK\android-sdk-windows\ tools \adb.exe!" ...No kidding - the update moved it to C:\SDK\android-sdk-windows\ platform-tools. Because it's specifying a specific (wrong) path Eclipse is bypassing the PATH variable. So how do I get Eclipse to look in the right place?

    Read the article

  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere around 256^32 subdirectories with 256 files in each leaf, each file only a few bytes long. I did this so I would have fast access to these files (since I'm not searching, I'm just directly accessing them via a known file path). I have a python script that builds this filesystem and reads & writes those files. The problem is that when I reach more than 1GB of total filesize, the read and write methods become extremely slow. Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing linux console commands, but it was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than directly accessing the file. I think the cause of the slowdown is that there are tons of files. Performing this function 1000 times sometimes reaches 1 minute now, but before (when there were only a few files) 1000 calls took less than 1 second. How do you suggest I fix this?
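    With this layout, most of the time is very likely going into path lookups and metadata updates rather than the few bytes of data, and the /tmp round-trip roughly triples the filesystem work per call (two extra whole-file copies each time). A sketch of the same update done in place, which touches one path instead of three; whether it helps enough depends on the filesystem, but it is the cheap thing to try first:

        def add_in_file(path, score_to_add):
            # Read the current value (if the file exists and holds an int),
            # add the score, and rewrite the file in place.
            num = score_to_add
            try:
                with open(path, 'r') as fp:
                    num += int(fp.read().strip() or 0)
            except (IOError, ValueError):
                pass
            with open(path, 'w') as fp:
                fp.write(str(num))

    Beyond that, the usual advice for millions of tiny records is to stop mapping one record to one file and keep the counters in something like a dbm or SQLite store keyed by the path string.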

    Read the article

  • Using CURL within a loop to download a file, 1st one works, 2nd one times out

    - by kitenski
    Morning all, I am using CURL to download an image file within a loop. The first time it runs fine and I see the image appear in the directory. The second time it fails with a timeout, despite it being a valid URL. Can anyone suggest why it always fails on the 2nd time and how to fix it? The snippet of code is:

        // download image
        $extension = "gif";
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_TIMEOUT, 90);
        curl_setopt($ch, CURLOPT_URL, $imgurl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        echo $imgurl . " attempting to open URL ";
        $i = curl_exec($ch);
        if ( $i==false ) {
            echo curl_errno($ch).' '.curl_error($ch);
        }
        $image_name=time().'.'.$extension;
        $f = fopen('/fulldirectorypath/' . $image_name ,'w+');
        fwrite($f,$i);
        fclose($f);

    I have put an echo in there to display $imgurl, to check that it is valid, and upped the timeout to 90 secs, but it still fails. This is what I see on screen:

        http://images.eu-xmedia.de/thumbnails/34555861/5676051/pt=pic,lang=2,origfile=yes/image.gif attempting to open URL
        28 Operation timed out after 90 seconds with 0 bytes received

    An empty file is created in my directory. Thanks a lot, Greg
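    Hard to say for certain from the snippet alone, but two things are worth ruling out: the handle is never closed between iterations, and with only CURLOPT_TIMEOUT set a slow connect is indistinguishable from a slow transfer. A sketch that gives each download a fresh handle, closes it, and fails fast on connect (the variable names follow the snippet above):

        <?php
        function download_image($imgurl, $dest)
        {
            $ch = curl_init($imgurl);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 15); // give up on connect quickly
            curl_setopt($ch, CURLOPT_TIMEOUT, 90);
            $data = curl_exec($ch);
            $err  = ($data === false) ? curl_error($ch) : null;
            curl_close($ch);

            if ($err !== null) {
                return $err;                  // caller can log it and continue
            }
            file_put_contents($dest, $data);
            return true;
        }

    If the second request still times out with 0 bytes received, the image host throttling or blocking rapid repeat requests from one IP is the next suspect; a short sleep() between iterations is an easy way to test that theory.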

    Read the article

  • python: help defining/installing simple script to setup machine-specific information

    - by Jason S
    (This is related to scons but I think most of the following should be fairly general to python) I would like to define a python file/library that I put in a Well-Known Place somewhere on my computer that I can use to define machine-specific paths, and was looking for help on how to do this well, since I'm a beginner to Python & really only use it for my scons work. scons uses a SConstruct file which can execute python code. What I would like to do is something like this: My SConstruct file would contain this at the beginning: defaultEnv = JJJJJ.getMachineSpecificPaths() or (do both of these syntaxes work?) import JJJJJ defaultEnv = getMachineSpecificPaths() I define a JJJJJ.py file somewhere installed in the python dir which contains the following def getMachineSpecificPaths(): ... does something here, I don't know what ... that reads a file machine-specific-paths.txt (maybe it has the code Ross Rogers mentioned in my other question) located in the same directory as JJJJJ.py containing the following: machine-specific-paths.txt TI_C28_ROOT C:/appl/ti/ccs/?4.1.1/ccsv4/tools/co?mpiler/c2000 JSDB c:/bin/jsdb/jsdb.exe PYTHON_PATH c:/appl/python/2.6.4 The thing is, I don't really know much about the conventions in Python about where you put system-wide libraries and files. This is probably really simple to get right but I don't know how.

    Read the article
