Search Results

Search found 21640 results on 866 pages for 'local storage'.


  • How to Bind Data and manipulate it in a GridView with MVP

    - by DotNetDan
    I am new to the whole MVP thing and slowly getting my head around it all. The problem I am having is how to stay consistent with the MVP methodology when populating GridViews (and DDLs, but we will tackle that later). Is it okay to have it connected straight to an ObjectDataSourceID? To me this seems wrong because it bypasses all the separation of concerns MVP was made to enforce. So, with that said, how do I do it? How do I handle sorting (do I send handler events over to the presentation layer, and if so, how does that look in code)? Right now I have a GridView that has no sorting. Code below.

    ListCustomers.aspx.cs:

        public partial class ListCustomers : System.Web.UI.Page, IlistCustomer
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // On every page load, create a new presenter object, passing
                // the page's IlistCustomer view to the constructor
                ListUserPresenter ListUser_P = new ListUserPresenter(this);
                // Call the presenter's PopulateList to bind data to the GridView
                ListUser_P.PopulateList();
            }

            GridView IlistCustomer.UserGridView
            {
                get { return gvUsers; }
                set { gvUsers = value; }
            }
        }

    The interface (IlistCustomer.cs). Is it bad to expose an entire GridView control this way?

        public interface IlistCustomer
        {
            GridView UserGridView { set; get; }
        }

    The presenter (ListUserPresenter.cs):

        public class ListUserPresenter
        {
            private IlistCustomer view_listCustomer;
            private GridView gvListCustomers;
            private DataTable objDT;

            public ListUserPresenter(IlistCustomer view)
            {
                // Throw if an IlistCustomer was not passed in
                if (view == null)
                    throw new ArgumentNullException("ListCustomer View cannot be blank");
                // Keep a reference to the IlistCustomer view
                this.view_listCustomer = view;
            }

            public void PopulateList()
            {
                // Grab the GridView exposed by the IlistCustomer view
                gvListCustomers = view_listCustomer.UserGridView;
                // Instantiate a new CustomerBusiness object to contact the database
                CustomerBusiness CustomerBiz = new CustomerBusiness();
                // Call CustomerBusiness's GetListCustomers to fill a DataTable
                objDT = CustomerBiz.GetListCustomers();
                // Bind the DataTable to the GridView
                gvListCustomers.DataSource = objDT;
                gvListCustomers.DataBind();
            }
        }
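
    One common way to keep web controls out of the presenter entirely, sketched below as an illustration rather than anything from the article, is to have the view expose only a data source and the current sort expression. The IListCustomerView name and its members here are hypothetical:

        // Hypothetical view contract: the presenter sees plain data, never a control
        public interface IListCustomerView
        {
            object CustomersDataSource { set; }
            string SortExpression { get; }
        }

        public class ListUserPresenter
        {
            private readonly IListCustomerView _view;

            public ListUserPresenter(IListCustomerView view)
            {
                if (view == null) throw new ArgumentNullException("view");
                _view = view;
            }

            public void PopulateList()
            {
                DataTable customers = new CustomerBusiness().GetListCustomers();
                // Sorting is decided here; the view only reports what the user clicked
                if (!string.IsNullOrEmpty(_view.SortExpression))
                    customers.DefaultView.Sort = _view.SortExpression;
                _view.CustomersDataSource = customers.DefaultView;
            }
        }

    The page then implements IListCustomerView, assigns CustomersDataSource to gvUsers.DataSource in the property setter, calls DataBind(), and forwards the GridView's Sorting event to the presenter, so the testable logic never references System.Web.UI.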

    Read the article

  • problem with MANIFEST.MF in jar

    - by dhananjay
    I have created my jar file at /usr/local/bin/niidle.jar, and I have another jar at /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar that I have to reference on the Class-Path in MANIFEST.MF. When I use the absolute path in MANIFEST.MF, like this:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar

    and run it with:

        java -jar /usr/local/bin/niidle.jar

    it works properly. But I don't want to give the full Class-Path; I want to give a relative one:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: lib/hector-0.6.0-17.jar

    When I run the same command with that manifest, it shows this error:

        Exception in thread "main" java.lang.NoClassDefFoundError: me/prettyprint/hector/api/Serializer
            at com.ensarm.niidle.web.scraper.NiidleScrapeManager.main(NiidleScrapeManager.java:21)
        Caused by: java.lang.ClassNotFoundException: me.prettyprint.hector.api.Serializer
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 1 more

    Please tell me the solution for that.
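
    For context (standard Java behavior, not something stated in the question): relative Class-Path entries in a manifest are resolved against the directory containing the jar, not against the working directory you launch from. So the relative manifest above only works if the layout on disk looks like this:

        /usr/local/bin/niidle.jar
        /usr/local/bin/lib/hector-0.6.0-17.jar    <-- dependency must sit next to the jar

    Copying or symlinking the hector jar into /usr/local/bin/lib/ should make the relative Class-Path behave exactly like the absolute one.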

    Read the article

  • Access a view inside a tab navigator when a tab is clicked

    - by magnus.lassi
    Hi, I have a view in Flex 3 where I use a tab navigator and a number of views inside the tab navigator. I need to know which view was clicked, because if it's one specific view then I need to take action; i.e. if the view with id "secondTab" is clicked, do something. I have set it up to be notified; my problem is that I need to be able to know which view it is. Calling tab.getChildByName or a similar method seems to only get me back a TabSkin object.

        <?xml version="1.0" encoding="utf-8"?>
        <mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml" width="100%" height="100%"
                 xmlns:local="*" creationComplete="onCreationComplete(event)">
            <mx:Script>
                <![CDATA[
                    import mx.events.FlexEvent;
                    import mx.controls.Button;

                    protected function onCreationComplete(event:Event):void
                    {
                        for (var i:int = 0; i < myTN.getChildren().length; i++)
                        {
                            var tab:Button = myTN.getTabAt(i);
                            tab.addEventListener(FlexEvent.BUTTON_DOWN, tabClickHandler);
                        }
                    }

                    private function tabClickHandler(event:FlexEvent):void
                    {
                        var tab:Button;
                        if (event.currentTarget is Button)
                        {
                            tab = event.currentTarget as Button;
                            // how do I access the actual view hosted in the tab that was clicked?
                        }
                    }
                ]]>
            </mx:Script>
            <mx:TabNavigator id="myTN">
                <local:ProductListView id="firstTab" label="First Tab" width="100%" height="100%" />
                <local:ProductListView id="secondTab" label="Second Tab" width="100%" height="100%" />
            </mx:TabNavigator>
        </mx:VBox>
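
    Since TabNavigator extends ViewStack, the child at a given index is the view behind the tab at that same index, so matching the clicked button back to its index recovers the view. A sketch of the handler along those lines (written here for illustration, not verified against the Flex 3 docs):

        private function tabClickHandler(event:FlexEvent):void
        {
            var tab:Button = event.currentTarget as Button;
            if (tab == null) return;

            // Tab index in the tab bar matches child index in the navigator
            for (var i:int = 0; i < myTN.getChildren().length; i++)
            {
                if (myTN.getTabAt(i) == tab)
                {
                    var view:ProductListView = myTN.getChildAt(i) as ProductListView;
                    if (view != null && view.id == "secondTab")
                    {
                        // the second tab's view was clicked; take action here
                    }
                    break;
                }
            }
        }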

    Read the article

  • Can ElementTree be told to preserve the order of attributes?

    - by dmckee
    I've written a fairly simple filter in Python using ElementTree to munge the contents of some XML files. And it works, more or less. But it reorders the attributes of various tags, and I'd like it to not do that. Does anyone know a switch I can throw to make it keep them in the specified order?

    Context: I'm working with and on a particle physics tool that has a complex, but oddly limited, configuration system based on XML files. Among the many things set up that way are the paths to various static data files. These paths are hardcoded into the existing XML, and there are no facilities for setting or varying them based on environment variables; in our local installation they are necessarily in a different place. This isn't a disaster, because the combined source- and build-control tool we're using allows us to shadow certain files with local copies. But even though the data files are static, the XML isn't, so I've written a script for fixing the paths. With the attribute rearrangement, though, diffs between the local and master versions are harder to read than necessary.

    This is my first time taking ElementTree for a spin (and only my fifth or sixth Python project), so maybe I'm just doing it wrong. Abstracted for simplicity, the code looks like this:

        tree = elementtree.ElementTree.parse(inputfile)
        i = tree.getiterator()
        for e in i:
            e.text = filter(e.text)
        tree.write(outputfile)

    Reasonable or dumb?

    Related links:
    How can I get the order of an element attribute list using Python xml.sax?
    Preserve order of attributes when modifying with minidom
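
    For background (general ElementTree behavior, not from the question): xml.etree.ElementTree has no ordering switch; it sorted attributes alphabetically on output until Python 3.8, which began preserving insertion order. One workaround is lxml, which keeps the attribute order found in the source document. A minimal sketch, assuming lxml is installed and using a hypothetical path fixup in place of the question's filter():

        # Sketch: lxml's serializer preserves source attribute order
        from lxml import etree

        tree = etree.parse("input.xml")
        for e in tree.iter():
            if e.text:
                e.text = e.text.replace("/old/data/root", "/new/data/root")  # placeholder fixup
        tree.write("output.xml")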

    Read the article

  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs, and job outcomes. This application is executed through a scheduled task using a local service account. This service account is local to the application server only and is not present on any SQL Server to be inspected. I am having problems getting information on jobs and job outcomes when connecting to the servers using a user with dbReader rights on the system tables. If we make the user sysadmin on the server, it all works fine.

    My question to you is: what are the minimum privileges a local SQL Server user needs in order to connect to the server and inspect jobs/job outcomes using the SQL SMO API?

    I connect to each SQL Server by doing the following:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs by reading smoServer.JobServer.Jobs and the JobSteps property on each of these jobs. The variable smoServer is of type Microsoft.SqlServer.Management.Smo.Server. user/password belong to the user found on each SQL Server to be inspected. If that user is sysadmin on the SQL Server being inspected, all works OK; the same goes if we set ConnectAsUser to true and execute the scheduled task under my own credentials, which grant me sysadmin privileges on SQL Server through my Active Directory membership. Thanks!
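
    Not from the question, but likely relevant: since SQL Server 2005, job metadata lives in msdb and is guarded by the SQL Agent fixed database roles, and SQLAgentReaderRole is normally the least-privileged role that can enumerate all jobs and their history. A sketch of the grant (the login name is a placeholder):

        USE msdb;
        -- Assumes the login 'sql_inspector' already exists on the instance
        CREATE USER sql_inspector FOR LOGIN sql_inspector;
        EXEC sp_addrolemember N'SQLAgentReaderRole', N'sql_inspector';

    If SMO still raises permission errors after that, the remaining gap is usually read access to the server-level properties the Server object touches, which is worth testing before falling back to sysadmin.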

    Read the article

  • Git: Failed at pushing to remote server, 'REPOSITORY_PATH' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on Windows Vista 64-bit. When I tried to push my local files to the remote bare repository, I got the following error message:

        git.exe push "origin" master:master
        git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'.
        fatal: The remote end hung up unexpectedly

    The arbitrary URL is: username@serverip:C:/Git_Repository/.git

    The same arbitrary URL worked just fine while doing clone/fetch/pull. Access from a local directory on the remote machine to this bare repository is no problem either, so I believe there is something wrong with my path. I can push/pull to GitHub correctly, but there I was using the URL provided by GitHub. Does anyone know what's wrong with my configuration?

    Here is my remote .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = true
            logallrefupdates = true
            ignorecase = true
            hideDotFiles = dotGitOnly

    Here is my local .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = false
            logallrefupdates = true
            symlinks = false
            ignorecase = true
            hideDotFiles = dotGitOnly
        [remote "origin"]
            fetch = +refs/heads
            url = username@serverip:C:/Git_Repository/.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    Thanks for any pointers.

    Read the article

  • VEMap and a GeoRSS feed (hosted separately)

    - by Alexis Abril
    The scenario is as follows: a WCF web service exists that outputs a valid GeoRSS feed. This lives in its own domain, as a number of different applications have access to it. A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object). Now, VEMap can accept an input feed in this format via the following:

        var layer = new VEShapeLayer();
        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
        map.ImportShapeLayerData(veLayerSpec, onComplete, true);

    onComplete is a callback function I'm using to replace the default pin graphic with something custom. The question is in regards to "someurl", which is a path to a local XML file containing the geographic information (GeoRSS simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format:

        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);

    When I do this, I get the VEMap error "z is null". This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local XML file (i.e. "feed.xml") there is no error. The order of operations is currently: remote feed, then local handler, then VEMap import. If I'm overcomplicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
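
    The proxy approach itself is sound, and a handler that relays the feed verbatim is only a few lines. A minimal sketch (illustrative only; the remote URL is a placeholder and there is no caching or error handling). The one detail worth double-checking is the Content-Type, since a feed relayed as text/html can produce exactly this kind of silent client-side failure:

        public class GeoRssProxy : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Fetch the remote GeoRSS feed and relay it unchanged
                using (var client = new System.Net.WebClient())
                {
                    string feed = client.DownloadString("http://example.com/service/georss"); // placeholder
                    context.Response.ContentType = "text/xml";
                    context.Response.Write(feed);
                }
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }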

    Read the article

  • Validate domain against LDAP?

    - by lucian.jp
    I have a procedure to get the name of the logged-in user shown on the site. I get it this way:

        var winIdentity = (WindowsIdentity) HttpContext.Current.User.Identity;
        if (winIdentity != null)
        {
            string domainUser = winIdentity.Name.Replace(@"\", "/");
            string domain = winIdentity.Name.Split('\\')[0];
            string user = winIdentity.Name.Split('\\')[1];

            var myDe = new DirectoryEntry(
                ConfigurationManager.ConnectionStrings["LDAP"].ConnectionString,
                ConfigurationManager.AppSettings["LDAPCredentials"].Split(';')[0],
                ConfigurationManager.AppSettings["LDAPCredentials"].Split(';')[1]);
            var deSearcher = new DirectorySearcher(myDe) { Filter = "(&(sAMAccountName=" + user + "))" };
            SearchResult result = deSearcher.FindOne();

            if (result != null)
            {
                DirectoryEntry userDe = result.GetDirectoryEntry();
                lblNameAD.Text = string.Format(lblNameAD.Text,
                    userDe.Properties["givenName"].Value, userDe.Properties["sn"].Value);
            }
            else
            {
                var adEntry = new DirectoryEntry("WinNT://" + domainUser);
                string fullname = adEntry.Properties["FullName"].Value.ToString();
                lblNameAD.Text = string.Format(lblNameAD.Text,
                    !string.IsNullOrEmpty(fullname) ? fullname : user, null);
            }
        }

    The problem is that if I have a local user account with the same username as one from LDAP, it passes the check and returns the name. For example:

        local\MyUser
        domain\MyUser

    Both return the name from AD, even though the local one isn't a domain account. It would be perfect if I could search LDAP for domain\user, but it seems I can't. I also tried to restrict the DC with the DirectorySearcher, but the domain name is "domain" and I only have "dc=dom" and "dc=com", with no DC for the full domain name.
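
    One cheap guard, offered here as a sketch rather than anything from the question: test the domain half of the identity before touching LDAP at all, since a local account carries the machine name where a domain account carries the domain's NetBIOS name:

        string domain = winIdentity.Name.Split('\\')[0];

        // Hypothetical pre-check: local accounts are prefixed with the machine name
        bool isLocalAccount = string.Equals(
            domain, Environment.MachineName, StringComparison.OrdinalIgnoreCase);

        if (isLocalAccount)
        {
            // Not a domain identity: skip the sAMAccountName search entirely
        }

    If the accounts in question are local to a machine other than the web server, the comparison target would need to be that machine's name instead.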

    Read the article

  • How to choose light version of database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a backend local database system on each POS device. The scenario is the following: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again.

    Now I am in a dilemma over which local database is going to be most suitable for me. Here are some notes to help point me in the right direction:

    It should be light ("my POS devices are usually old and suffering performance-wise").
    It should be free ("I have a lot of devices and I do not want additional cost beside the main SQL Server").
    One day I'd love to try porting all of that to Mono and Linux.

    Here is what I've researched so far:

    Simple XML: light, but I am afraid of performance; my main table of items averages 10K records.
    SQL Server Express: I am afraid my POS devices are too poor in hardware for SQL Express, and it is also hard to install and configure on each device.
    The lesser-known Advantage Database Server has a free distribution of its offline ADT system.
    DBF with an extended library: respect for good old DBFs, but that era is behind me, with Clipper and DBFs.
    MS Access.
    SQLite: my favorite for now, but I am afraid of how it is going to pair with MS SQL; do they have the same data types?

    I know that on SO this is quite subjective, but can someone at least recommend some other lite database systems, or things I should pay the most attention to before I choose a database?

    Read the article

  • Routing trouble for RESTful API - Rails

    - by aressidi
    I'm building out an API for a web app that I've been working on for some time. I've started with the User model. The user portion of the API will allow remote clients to a) retrieve user data, b) update user information, and c) create new users. I've gotten all of this to work, but it doesn't seem like it's set up correctly. Here are my questions:

    Should the API endpoint be users or user? What's the best practice? I have to add the action to the end, which I would expect to be picked up instead by the request type, so I don't have to specify it explicitly. How do I get my routes set up properly so as not to have to include the method for protected actions? Let me give some examples:

    GET request for show (want it to work without the "show"):

        curl -u rmbruno:blah http://app.local/api/users/show

    PUT request for update (want it to work without the "update"):

        curl -X put -F 'user[forum_notifications]=true' -u rmbruno:blah http://app.local/api/users/update

    Create (works with or without 'create', which is what I want for all these actions):

        curl -X post -F 'user[login]=mamafatta' -F 'user[email][email protected]' -F 'user[password]=12345678' http://twye.local/api/users/

    How do I structure routes so as not to require the action name? Isn't that the common way to do RESTful APIs? Here is my route for the API now:

        map.namespace :api do |route|
          route.resources :users
          route.resources :weight
        end

    I'm using restful_authentication, which is handling the HTTP auth in curl. Any guidance on the routes issue and best practice on singular versus plural would be really helpful. Thanks! -A
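
    As a sketch of the conventional Rails answer (not taken from the question itself): RESTful routes dispatch on the HTTP verb plus the path, so the action name never belongs in the URL; and when the client always operates on "the current user", a singular resource removes the id as well. In Rails 2 routing syntax that would look roughly like:

        map.namespace :api do |route|
          # Singular resource: no id, no action name in the URL.
          # GET /api/user -> show, PUT /api/user -> update, POST /api/user -> create
          route.resource :user
          route.resources :weight
        end

    With that in place, the curl calls become GET /api/user, PUT /api/user, and POST /api/user, and the plural/singular question largely resolves itself: plural resources for collections addressed by id, a singular resource for the one record implied by the authenticated session.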

    Read the article

  • Delphi App has "No Debug Info" when Debugging

    - by James L.
    We have built an application that uses packages and components. When we debug the application, the "Event Log" in the IDE often shows that our BPLs are being loaded without debug information ("No Debug Info"). This doesn't make sense, because all our packages and EXEs are built with debug.

    (each project) | Options | Compiling:

        [x] Assertions
        [x] Debug information
        [x] Local symbols
        Symbol reference info = "Reference info"
        [ ] Use debug .dcus
        [x] Use imported data references

    (each project) | Options | Linking:

        [x] Debug information
        Map file = Detailed

    We have 4 projects, all built with runtime packages:

    1. Core.bpl
    2. Components.bpl
    3. Plugin.bpl (uses both #1 and #2)
    4. MainApp.exe (uses #1)

    Problems observed:

    1) Many times when we debug, Components.bpl is loaded with debug info, but all values in the "Local Variables" window are blank. If you hover your mouse over a variable in the code, there is no popup, and the Evaluate window also shows nothing (the "Result" pane is always blank).

    2) Sometimes the Event Log shows "No Debug Info" for various BPLs. For instance, if we activate the Plugin.bpl project, set its Run | Parameters Host Application to MainApp.exe, and press F9, all modules seem to load with "Has Debug Info" except the Plugin.bpl module; when it loads, the Event Log shows "No Debug Info". However, if we close the app and immediately press F9, it runs again without recompiling anything, and this time Plugin.bpl is loaded with debug info ("Has Debug Info").

    Questions:

    1) What would cause the "Local Variables" window to not display the values?

    2) Why are BPLs sometimes loaded without debug info when the BPL was compiled with debug and all the debug files (dcu, map, etc.) are available?

    Read the article

  • COM-Objects containing maps / content error(0)

    - by Florian Berzsenyi
    I'm writing a small wrapper to get familiar with some important topics in C++ (WinAPI, COM, STL, OOP). For now, my class shall be able to create a (child) window. Mainly, this window is connected to a global message loop that distributes messages to the local loop of the right instance (the global loop is static, the local one virtual). Obviously there are surely better ways to do that, but I'm using std::map to store HWNDs and their instance pointers in pairs: the global loop takes the HWND parameter, looks the pointer up in the map, and then calls the local loop.

    Now, it appears that the map does not accept any values, for an unknown reason. It seems to allocate enough space, but something goes wrong anyway ("(error) 0" is displayed instead of the entries in Visual C++). I've looked that up in Google as well and found out that maps cause some trouble when used in classes AND DLLs. May this be the reason, and is there any solution?

    Protected scope of the class:

        static std::map<HWND, MAP_BASE_OBJECT*> m_LoopBuf;

    Implementation in the .cpp file:

        std::map<HWND, MAP_BASE_OBJECT*> HWindow::m_LoopBuf;
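
    For reference, the routing idea itself is a standard pattern; here is a sketch of how the static map usually ties the global window procedure back to an instance (names adapted from the question, with a hypothetical LocalWndProc method, not the poster's actual code):

        // Registered with RegisterClass as the window procedure
        static LRESULT CALLBACK GlobalWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
        {
            std::map<HWND, MAP_BASE_OBJECT*>::iterator it = m_LoopBuf.find(hwnd);
            if (it != m_LoopBuf.end())
                return it->second->LocalWndProc(msg, wp, lp);   // virtual per-instance loop
            return DefWindowProc(hwnd, msg, wp, lp);
        }

    On the DLL question: an std::map created in one module and touched from another is only safe when every module shares the same compiler settings and the same runtime (for example, all built /MD against one CRT), because memory must be freed by the allocator that allocated it; mismatched runtimes or iterator-debug levels can corrupt the container. Also, "(error) 0" in the Visual C++ debugger can simply mean the watch window cannot evaluate the expression in the current context, so it is worth printing the map's size() in code before concluding that the inserts fail.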

    Read the article

  • nginx error: (99: Cannot assign requested address)

    - by k-g-f
    I am running Ubuntu Hardy 8.04 and nginx 0.7.65, and when I try starting my nginx server:

        $ sudo /etc/init.d/nginx start

    I get the following error:

        Starting nginx: [emerg]: bind() to IP failed (99: Cannot assign requested address)

    where "IP" is a placeholder for my IP address. Does anybody know why that error might be happening? This is running on EC2.

    My nginx.conf file looks like this:

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            access_log /usr/local/nginx/logs/access.log;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 3;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    and my /usr/local/nginx/sites-enabled/example.com looks like:

        server {
            listen IP:80;
            server_name example.com;
            rewrite ^/(.*) https://example.com/$1 permanent;
        }

        server {
            listen IP:443 default ssl;
            ssl on;
            ssl_certificate /etc/ssl/certs/myssl.crt;
            ssl_certificate_key /etc/ssl/private/myssl.key;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
            server_name example.com;
            access_log /home/example/example.com/log/access.log;
            error_log /home/example/example.com/log/error.log;
        }
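
    A note on the likely cause (standard EC2 networking, not something stated in the question): an EC2 instance's public/elastic IP is NATed and never actually assigned to the instance's network interface, so nginx cannot bind() to it, and errno 99 is exactly what that failure looks like. Listening on all interfaces (or on the instance's private IP) sidesteps it; a sketch of the change:

        server {
            listen 80;               # instead of listen PUBLIC_IP:80
            server_name example.com;
            rewrite ^/(.*) https://example.com/$1 permanent;
        }

        server {
            listen 443 default ssl;  # instead of listen PUBLIC_IP:443
            # ... rest of the ssl server block unchanged ...
        }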

    Read the article

  • MANIFEST.MF issue

    - by dhananjay
    Hi, I have created a jar in Eclipse at /usr/local/bin/niidle.jar. Inside niidle.jar there is one 'lib' folder, and in that 'lib' folder there is another jar file, 'hector-0.6.0-17.jar'. I have added this 'hector-0.6.0-17.jar' file to MANIFEST.MF as follows:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: hector-0.6.0-17.jar

    But when I run this using the command:

        java -jar /usr/local/bin/niidle.jar arguments...

    it is not working. It shows this error message:

        Exception in thread "main" java.lang.NoClassDefFoundError: me/prettyprint/hector/api/Serializer
            at com.ensarm.niidle.web.scraper.NiidleScrapeManager.main(NiidleScrapeManager.java:21)
        Caused by: java.lang.ClassNotFoundException: me.prettyprint.hector.api.Serializer
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 1 more

    What is the problem? Please tell me the solution for this exception.
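
    Worth knowing here (standard JVM behavior, not from the question): the system class loader cannot load classes from a jar nested inside another jar, and manifest Class-Path entries name filesystem paths relative to the containing jar, never entries packed inside it. So with this manifest the hector jar has to sit on disk next to niidle.jar:

        /usr/local/bin/niidle.jar
        /usr/local/bin/hector-0.6.0-17.jar    <-- extracted out of niidle.jar's lib folder

    Tools that repackage dependencies into one flat, runnable jar (the maven-shade-plugin or the One-JAR project, for example) are the usual alternative when shipping a single file matters.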

    Read the article

  • proper use of volatile keyword

    - by luke
    I think I have a pretty good idea about the volatile keyword in Java, but I'm thinking about refactoring some code and I thought it would be a good idea to use it. I have a class that basically works as a DB cache: it holds a bunch of objects that it has read from a database, serves requests for those objects, and occasionally refreshes from the database (based on a timeout). Here's the skeleton:

        public class Cache {
            private HashMap mappings = ...;
            private long last_update_time;

            private void loadMappingsFromDB() {
                // ...
            }

            private void checkLoad() {
                if (System.currentTimeMillis() - last_update_time > TIMEOUT)
                    loadMappingsFromDB();
            }

            public Data get(ID id) {
                checkLoad();
                // .. look it up
            }
        }

    The concern is that loadMappingsFromDB could be a high-latency operation, and that's not acceptable. So initially I thought I could spin up a thread on cache startup and have it sleep and then update the cache in the background. But then I would need to synchronize my class (or the map), and then I would just be trading an occasional big pause for making every cache access slower.

    Then I thought: why not use volatile? I could define the map reference as volatile:

        private volatile HashMap mappings = ...;

    and then in get (or anywhere else that uses the mappings variable) I would just make a local copy of the reference:

        public Data get(ID id) {
            HashMap local = mappings;
            // .. look it up using local
        }

    The background thread would then load into a temp map and swap the references in the class:

        HashMap tmp;
        // load tmp from DB
        mappings = tmp; // swap variables, forcing a write barrier

    Does this approach make sense? And is it actually thread-safe?
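
    For what it's worth, this reference swap is the standard safe-publication idiom, and it is thread-safe provided the new map is fully built before the volatile write and never mutated afterwards: the volatile write happens-before any later read of the field, so readers see either the old snapshot or the complete new one. A self-contained sketch (class and method names invented here):

        import java.util.Collections;
        import java.util.HashMap;
        import java.util.Map;

        public class SnapshotCache<K, V> {
            // Readers take one volatile read and never lock
            private volatile Map<K, V> mappings = Collections.emptyMap();

            public V get(K key) {
                return mappings.get(key);
            }

            // Called only by the background refresh thread
            void publish(Map<K, V> freshlyLoaded) {
                // Defensive copy, wrapped unmodifiable so the snapshot can never mutate
                mappings = Collections.unmodifiableMap(new HashMap<K, V>(freshlyLoaded));
            }
        }

    The one behavior change from the original skeleton is that the timeout check moves into the background thread, so no caller ever pays the reload latency.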

    Read the article

  • Two way sync with rsync

    - by mwm
    I have a folder a/ and a remote folder A/. I now run something like this in a Makefile:

        get-music:
            rsync -avzru server:/media/10001/music/ /media/Incoming/music/

        put-music:
            rsync -avzru /media/Incoming/music/ server:/media/10001/music/

        sync-music: get-music put-music

    When I make sync-music, it first gets all the diffs from server to local and then the opposite, sending all the diffs from local to server. This works very well, but only if there are just updates or new files in the future. If there are deletions, it doesn't do anything.

    rsync has --delete and --delete-after options to help accomplish what I want, but the thing is, they don't work for a two-way sync. Deleting server files on a sync when the local files have been deleted works; but if, for some reason (explained below), I have some files that aren't on the server but exist locally and were deleted elsewhere, I want them removed locally, not copied back to the server (which is what happens).

    The thing is, I have three machines in context: desktop, notebook, and home server. So sometimes the server will have files that were deleted in a notebook sync, for example, and then, when I run a sync from my desktop (where the deleted files still exist), I want these files to be deleted locally and not copied back to the server. I guess this is only possible with a database and a record of operations. :P Any simple solutions? Thank you.
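
    The diagnosis at the end is right: rsync keeps no state between runs, so it cannot distinguish "deleted on one side" from "never existed on the other". A two-way synchronizer that records per-replica state, such as Unison, handles exactly this case and propagates deletions in both directions. A sketch of an equivalent invocation (untested; flags kept minimal):

        unison -batch /media/Incoming/music ssh://server//media/10001/music

    Unison stores archives of the last-synced state under ~/.unison on both ends, which is precisely the "record of operations" the question suspects is needed.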

    Read the article

  • no such file to load -- rubygems (LoadError)

    - by Vineeth
    Hello, I recently installed Rails on Fedora 12. I'm new to Linux as well. Everything works fine on Windows 7, but I'm facing a lot of problems on Linux. Help please! I've installed all the essentials, to my knowledge, to get the basic script/server up and running, but this error from boot.rb pops up when I try script/server. Some of the details:

    The directories where ruby, rails and gem are installed:

        [vineeth@localhost my_app]$ which ruby
        /usr/local/bin/ruby
        [vineeth@localhost my_app]$ which rails
        /usr/bin/rails
        [vineeth@localhost my_app]$ which gem
        /usr/bin/gem

    And when I run script/server, this is the error:

        [vineeth@localhost my_app]$ script/server
        ./script/../config/boot.rb:9:in `require': no such file to load -- rubygems (LoadError)
            from ./script/../config/boot.rb:9
            from script/server:2:in `require'
            from script/server:2

    And the PATH file looks like this:

        [vineeth@localhost my_app]$ cat ~/.bash_profile
        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi

        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin
        export PATH="/usr/local/bin:/usr/local/sbin:/usr/bin/ruby:$PATH"

    I suppose it is something to do with the PATH file. Let me know what I need to change here. If there are other changes I should make, please let me know. Thanks.
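
    A reading of those paths, offered as a diagnosis sketch rather than a verified fix: ruby resolves to /usr/local/bin/ruby (likely a source build) while rails and gem come from /usr/bin (Fedora's packages), so script/server runs under a ruby that has no rubygems installed. Two quick checks, plus the classic Ruby 1.8 workaround of preloading rubygems via RUBYOPT:

        # Does the ruby on PATH have rubygems at all?
        /usr/local/bin/ruby -e 'require "rubygems"; puts Gem.path'
        /usr/bin/ruby -e 'require "rubygems"; puts Gem.path'

        # If /usr/bin/ruby is the one with gems, put it first on PATH, or preload rubygems:
        echo 'export RUBYOPT=rubygems' >> ~/.bash_profile
        source ~/.bash_profile
        script/server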

    Read the article

  • questions about using svn

    - by ajsie
    I have set up an svn repository on a remote Ubuntu server using WebDAV (http://ip/svn), but I have some questions:

    1. Should I create one svn repository for each project folder, i.e. run "svnadmin create" 5 times for 5 projects? Or should I import them into separate folders of a single svn repository?

    2. After I have created an svn repo and imported a project folder into it with "svn import", should I DELETE my original local folder, thus only having the code in the svn repo on the remote Ubuntu server? Is this safe, or should I still keep a local copy for some reason (given that I won't work in that one)?

    3. I use NetBeans to check out projects from the svn repo, then I edit the files and commit the changes. But how should I update the live web server content (in /var/www)? Should I use "svn checkout" on my Ubuntu server and check out to /var/www, or should I use NetBeans to check out to a local folder and then upload the files to /var/www with FTP or WebDAV (and which of the two should I use)?

    4. Say I have an svn repo and 4 programmers working with it, checking out and committing. How do I see what is happening in the repo: who made which changes, and all the changes from day 1 to day 213? Do NetBeans/Eclipse let me see this kind of information, or do I need another application for it?

    5. Should I have one svn user for each programmer on the Ubuntu server? Is this accomplished by running "htpasswd" 4 times for 4 programmers? And how do I couple all these users to the same group, so that I can modify file access specifically for the svn group and all its members?
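
    On questions 3 and 4, two standard pieces, shown as a sketch with assumed paths: the history is all in "svn log", and a common way to keep /var/www current is to make it a working copy that a post-commit hook updates on every commit.

        # Who changed what, with affected paths, since day one
        svn log -v http://ip/svn

        # /path/to/repo/hooks/post-commit on the server (must be executable):
        #!/bin/sh
        /usr/bin/svn update -q /var/www

    This assumes /var/www was first populated with "svn checkout http://ip/svn/trunk /var/www" (adjust the URL to your layout) and that the hook runs as a user with write access to it.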

    Read the article

  • IIS site always returns 404 to WinMo emulator

    - by Derick Bailey
    I'm running Win7 x64 Ultimate with Visual Studio 2008. I have a website built in ASP.NET 3.5 and hosted via IIS on my box. I can run the website perfectly fine, and I can hit all of the web services that I have built in the website using a web browser.

    When I pull up my Windows Mobile 6 emulator and hit the site (using my IP address), it always returns a 404 error. I have the emulator cradled with Device Emulator Manager and I can interact with the emulated device normally. I am also able to get out to google.com and other websites with the emulated device. I have also verified that the emulator is hitting my box by stopping the IIS website and seeing that the WinMo emulator cannot get any response; when I start the site again, I get a 404 error again. When I pull up my site on my local dev box via Firefox or IE using the IP address, it works perfectly fine. The worst part is that this worked perfectly fine a few weeks ago, when I used it last. I don't know that I've changed anything since then; I'm just trying to use the emulator to hit my site again. Help?!

    Update: the HTTP requests coming from the WinMo emulator are not getting logged in the IIS log files, while my requests from Firefox on my local box are getting logged. Not sure if that helps in figuring out the problem.

    Update 2: I can use the Ruby WEBrick server on my local box and hit that server from my emulator just fine. Is IIS not allowing me to hit the IIS site from the emulator?

    Update 3: I cradled an actual WinMo device to my box with its networking turned off and was able to hit the IIS site just fine. That makes me think something is set up wrong in the emulator.

    Read the article

  • How do I create a new project in TFS from an existing project (breaking history)?

    - by Lindsay
    My team is taking over a project from a previous team. We use a different TFS server than the original team, and we are also not interested in keeping the history of the project, because we are accepting the latest version of the code as the beginning of our history with the project. Branching is not an option, since we want to start our history from the current version of the code. We just want a fresh project with the existing code.

    I have not been able to create the new project from the old code successfully. I keep getting an error:

        Source control cannot add the solution: Solution would span multiple workspaces

    My process for attempting the new project creation:

    1. Create a workspace for the previous team's version of the code.
    2. Get the latest version of that code into the local mapped workspace directory.
    3. Open the solution.
    4. Unbind all projects and the solution.
    5. Close the solution.
    6. Create a workspace for the new version of the code on our TFS server.
    7. Copy the unbound code on my local box to the new local workspace mapped folder.
    8. Open the solution from the new directory.
    9. "Add to source control" from the new solution.

    Then I get the error. I have tried removing the TFS security files from the code directories in the unbound version, and I have tried changing source control instead of adding to source control (but it just binds back to the original instead of letting me bind to the new). Is there any other way to do this besides recreating the solution/projects and adding back all the files and references? It doesn't seem like it should be this difficult... Any advice much appreciated!

    Read the article

  • Read file:// URLs in IE XMLHttpRequest

    - by Dan Fabulich
    I'm developing a JavaScript application that's meant to be run either from a web server (over http) or from the file system (on a file:// URL). As part of this code, I need to use XMLHttpRequest to load files in the same directory as the page and in subdirectories of the page. This code works fine ("PASS") when executed on a web server, but doesn't work ("FAIL") in Internet Explorer 8 when run off the file system:

        <html><head>
        <script>
        window.onload = function() {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", window.location.href, false);
            xhr.send(null);
            if (/TestString/.test(xhr.responseText)) {
                document.body.innerHTML = "<p>PASS</p>";
            }
        }
        </script>
        <body><p>FAIL</p></body>

    Of course, at first it fails because no scripts can run at all from the file system; the user is prompted with a yellow bar warning that "To help protect your security, Internet Explorer has restricted this webpage from running scripts or ActiveX controls that could access your computer." But even once I click on the bar and "Allow Blocked Content", the page still fails; I get an "Access is Denied" error on the xhr.open call. This puzzles me, because MSDN says that "For development purposes, the file:// protocol is allowed from the Local Machine zone." This local file should be part of the Local Machine zone, right?

    How can I get code like this to work? I'm fine with prompting the user with security warnings; I'm not OK with forcing them to turn off security in the control panel.

    EDIT: I am not, in fact, loading an XML document in my case; I'm loading a plain text file (.txt).
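
    One workaround often cited for this (a sketch, worth verifying against the target IE8 configuration): IE's native XMLHttpRequest refuses file:// URLs, but the legacy MSXML ActiveX object is not bound by the same restriction once the user has allowed blocked content, so falling back to it on the file protocol can make the same page work in both environments:

        window.onload = function () {
            var xhr;
            if (location.protocol === "file:" && window.ActiveXObject) {
                // Legacy MSXML object: allowed to read local files in IE
                xhr = new ActiveXObject("Msxml2.XMLHTTP");
            } else {
                xhr = new XMLHttpRequest();
            }
            xhr.open("GET", window.location.href, false);
            xhr.send(null);
            if (/TestString/.test(xhr.responseText)) {
                document.body.innerHTML = "<p>PASS</p>";
            }
        };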

    Read the article

  • Namespace constants and use as

    - by GordonM
    I'm having some problems with using constants from a namespace. If I define the constant and try to "use" it, PHP seems unable to find it. For example, in my file with the constants I have code along the lines of the following:

        namespace \my\namespace\for\constants;

        const DS = DIRECTORY_SEPARATOR;

    Then in the consuming file I have:

        namespace \some\other\namespace;

        use \my\namespace\for\constants\DS as DS;

        echo (realpath (DS . 'usr' . DS . 'local'));

    However, instead of echoing '/usr/local' as expected, I get the following notice and an empty string:

        Notice: Use of undefined constant DS - assumed 'DS'

    If I change the code as follows:

        use \my\namespace\for\constants as cns;

        echo (realpath (cns\DS . 'usr' . cns\DS . 'local'));

    I get the expected result, but it's obviously quite a bit less convenient than being able to pull the constants in directly. You can alias a class/interface/trait in a namespace; are you not able to alias a constant too? If you can do it, then how?
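
    The short answer, from general PHP behavior rather than the question: before PHP 5.6, "use" could only import namespaces and class-like names, so "use ...\DS" silently imported a (nonexistent) class named DS, and the bare DS in the echo fell through to the undefined-constant notice. PHP 5.6 added "use const" (and "use function") for exactly this. A sketch, with a placeholder namespace in place of the question's reserved-word-laden one:

        <?php
        // PHP 5.6+: constants can be imported by name
        use const my\space\DS;

        echo realpath(DS . 'usr' . DS . 'local');

    On older versions, the namespace alias shown in the question (cns\DS) or the fully qualified \my\space\DS are the only options.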

    Read the article

  • How do I know which include path will be used in PHP?

    - by Joe Majewski
    When I run phpinfo() and look under Configuration > PHP Core, I see a directive titled include_path, with a local value and a master value. In this case, my local value is set to:

        .:./include:../include:/usr/share/php:/usr/share/php/smarty:/usr/share/pear

    and my master value is set to:

        .:/usr/share/php:/usr/share/pear:/usr/share/php/pear:/usr/share/php/smarty

    The reason I am trying to learn how this works is that there is a file in the system I am working on titled Smarty.class.php, which I'm sure sounds very familiar to anyone who uses the Smarty templating engine. One of the PHP files has the following includes:

        require_once("Smarty.class.php");
        require_once("user_info_class.inc");

    The file user_info_class.inc is in the same directory as the file making the include, which makes perfect sense to me and is the way I've always referenced files. I decided that I wanted to open the Smarty.class.php file and assumed it would be in the same directory, but it was not. After a bit of digging, I discovered those php.ini variables and was finally able to locate the file in /usr/share/php/smarty/.

    So it would seem that, when making an include, PHP follows some sort of order between the local and master values for the include_path. Assuming my deductions were correct thus far, can someone explain the order in which PHP searches for the files to be included?
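
    To fill in the general rule (standard PHP behavior, not from the question): the master value is what php.ini sets, the local value is the effective setting after any per-directory or runtime overrides (httpd.conf, .htaccess, ini_set/set_include_path), and only the local value is ever searched. PHP tries each entry left to right ("." is the current working directory), and if the file still isn't found, include falls back to the calling script's own directory; that fallback is why user_info_class.inc resolves next to its includer while Smarty.class.php is found later in /usr/share/php/smarty. A quick way to see and adjust the effective value:

        <?php
        // The effective ("local") include_path that require_once actually searches
        echo get_include_path();

        // Runtime changes touch only the local value, never the master one
        set_include_path('/extra/lib' . PATH_SEPARATOR . get_include_path());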

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on sysadmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center (literally across the street). This meant doing much more ourselves: things like networking, storage and monitoring.

    As part of the big move, to replace our leased direct-attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and . It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system; recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucketloads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time, our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike.

    After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. The first thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline       Completed: read failure       90%      6576         3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline       Completed: read failure       90%      6087         3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline       Completed: read failure       10%      5901         656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline       Completed: read failure       90%      5818         651637856

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away, just like syslog says it is. (Graph of disk 7's SMART read errors omitted.) But we also see that disk 8's SMART read errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that:

    Disk 8 is experiencing an odd hardware fault that results in intermittent long operation times.
    Somehow this fault condition on the disk is locking up the entire array.

    Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
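
    One factor commonly raised in discussions of exactly this failure mode (context, not a confirmed diagnosis of this array): desktop-class SATA drives without time-limited error recovery (TLER/ERC) can spend tens of seconds retrying a bad sector internally, during which the controller can do little but wait, stalling IO for the whole array. smartmontools can report, and on supporting drives set, the ERC timeouts; a sketch:

        # Show the drive's current error-recovery-control timeouts (if supported)
        smartctl -l scterc /dev/sda

        # Cap read/write recovery at 7 seconds (values in tenths of a second)
        smartctl -l scterc,70,70 /dev/sda

    On a 3ware controller, individual ports are usually addressed through the twa driver instead (e.g. "smartctl -l scterc -d 3ware,7 /dev/twa0"), so the device syntax above would need adjusting.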

    Read the article
