Search Results

Search found 18475 results on 739 pages for 'auth to local'.

Page 203/739 | < Previous Page | 199 200 201 202 203 204 205 206 207 208 209 210  | Next Page >

  • Symfony FK issue on insertion

    - by Daniel Hertz
    Hi, I posted a similar problem but it could not be resolved. I am creating a relational
    database of users and groups, but for some reason I cannot insert test data with fixtures
    properly. Here is a sample of the schema:

        User:
          actAs: { Timestampable: ~ }
          columns:
            name:     { type: string(255), notnull: true }
            email:    { type: string(255), notnull: true, unique: true }
            nickname: { type: string(255), unique: true }
            password: { type: string(300), notnull: true }
            image:    { type: string(255) }

        Group:
          actAs: { Timestampable: ~ }
          columns:
            name:  { type: string(500), notnull: true }
            image: { type: string(255) }
            type:  { type: string(255), notnull: true }
            created_by_id: { type: integer }
          relations:
            User: { onDelete: SET NULL, class: User, local: created_by_id, foreign: id, foreignAlias: groups_created }

        FanOf:
          actAs: { Timestampable: ~ }
          columns:
            user_id:  { type: integer, primary: true }
            group_id: { type: integer, primary: true }
          relations:
            User:  { onDelete: CASCADE, local: user_id, foreign: id, foreignAlias: fanhood }
            Group: { onDelete: CASCADE, local: group_id, foreign: id, foreignAlias: fanhood }

    And this is the data I try to insert:

        User:
          user1:
            name: Danny
            email: [email protected]
            nickname: danny
            password: f05050400c5e586fa6629ef497be
        Group:
          group1:
            name: Mets
            type: sports
        FanOf:
          fans1:
            user_id: user1
            group_id: group1

    I keep getting this error:

        SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row:
        a foreign key constraint fails (`krowdd`.`fan_of`, CONSTRAINT `fan_of_user_id_user_id`
        FOREIGN KEY (`user_id`) REFERENCES `user` (`id`) ON DELETE CASCADE)

    The users and groups are clearly being created before the "fanhood" is, so why am I
    getting this error? Thanks!
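
    One fix consistent with Doctrine 1.x fixture semantics: in YAML fixtures, links to other
    fixtures are normally made through the relation aliases rather than the raw *_id columns,
    so the labels user1 and group1 can be resolved to the generated ids. A sketch of the
    FanOf fixture rewritten that way, assuming the schema above:

        FanOf:
          fans1:
            User: user1
            Group: group1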

    Read the article

  • Vagrant - Failed login, ssh not set up

    - by motleydev
    This question is twofold, because somewhere in my attempts to solve the problem I created
    a new one.

    First: I was trying to vagrant up using a Vagrantfile based on the standard
    hashicorp/precise32 box. Everything worked up until

        default: SSH auth method: private key

    where it would eventually time out. Enabling the GUI in the Vagrantfile showed that the
    machine never actually logs in. I can use the standard user/pass and log in from that
    point, but the vagrant up process still remains at that prior status.

    Here's where my understanding might be a little dim. I've tried setting the auth method
    from the insecure_pass_key to my root ~/.ssh/id_rsa or whichever one I wanted to use. I'm
    not entirely sure where to put a copy of the public key or my authorized_keys file. I've
    got a .vagrant.d folder in my user dir (I'm on OS X) which seems to contain the box
    images, and a .vagrant folder in the directory with the Vagrantfile which seems to
    contain the specific machine I am building off of. I've tried poring over the docs and
    forums, but I seem to be missing a key concept here.

    And now: after a host of tips/tricks such as rolling back my VirtualBox install and
    uninstalling/reinstalling Vagrant and VirtualBox several times, when I try to run
    vagrant ssh-config with a Vagrantfile based on the same hashicorp/precise32, it says the
    box is not enabled for SSH. Specifically, this error:

        The provider for this Vagrant-managed machine is reporting that it is not yet ready
        for SSH. Depending on your provider this can carry different meanings. Make sure your
        machine is created and running and try again. Additionally, check the output of
        `vagrant status` to verify that the machine is in the state that you expect. If you
        continue to get this error message, please view the documentation for the provider
        you're using.

    So now I am slightly up a creek. Any help would be appreciated, if only to clarify a
    concept. Some pertinent info: I'm on OS X Mavericks. Due to the fact that I'm running a
    dual-HD system with system files on one HD and user files on another, my permissions are
    a little wonky, and VBoxManage will only let me run commands via sudo - not sure if
    that's pertinent, but maybe. I have no idea what I'm doing. That part is perhaps more
    important.
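
    One common recovery path, assuming the downloaded box itself may be at fault, is to start
    from a completely clean machine rather than reusing the one that never finished booting -
    a sketch:

        vagrant destroy -f                        # drop the half-built machine
        vagrant box remove hashicorp/precise32    # discard the cached box in ~/.vagrant.d
        vagrant box add hashicorp/precise32       # re-download it
        vagrant up                                # retries the insecure-key handshake from scratch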

    Read the article

  • How to pull one commit at a time from a remote git repository?

    - by Norman Ramsey
    I'm trying to set up a darcs mirror of a git repository. I have something that works OK,
    but there's a significant problem: if I push a whole bunch of commits to the git repo,
    those commits get merged into a single darcs patchset. I really want to make sure each
    git commit gets set up as a single darcs patchset. I bet this is possible by doing some
    kind of git fetch followed by interrogation of the local copy of the remote branch, but
    my git fu is not up to the job. Here's the (ksh) code I'm using now, more or less:

        git pull -v   # pulls all the commits from remote --- bad!

        # gets information about only the last commit pulled -- bad!
        author="$(git log HEAD^..HEAD --pretty=format:"%an <%ae>")"
        logfile=$(mktemp)
        git log HEAD^..HEAD --pretty=format:"%s%n%b%n" > $logfile

        # add all new files to darcs and record a patchset. this part is OK
        darcs add -q --umask=0002 -r .
        darcs record -a -A "$author" --logfile="$logfile"
        darcs push -a
        rm -f $logfile

    My idea is:

        1. Try git fetch to get a local copy of the remote branch (not sure exactly what
           arguments are needed).
        2. Somehow interrogate the local copy to get a hash for every commit since the last
           mirroring operation (I have no idea how to do this).
        3. Loop through all the hashes, pulling just that commit and recording the associated
           patchset (I'm pretty sure I know how to do this if I get my hands on the hash).

    I'd welcome either help fleshing out the scenario above or suggestions about something
    else I should try. Ideas?
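
    A sketch of the fetch-and-loop idea, assuming the mirrored branch is origin/master:
    git fetch updates only the remote-tracking branch, git rev-list --reverse lists the new
    commits oldest first, and fast-forwarding one hash at a time lets each commit become its
    own darcs patchset:

        git fetch origin
        for hash in $(git rev-list --reverse HEAD..origin/master); do
            git merge --ff-only $hash       # advance the working tree by exactly one commit
            author="$(git log -1 --pretty=format:'%an <%ae>' $hash)"
            logfile=$(mktemp)
            git log -1 --pretty=format:'%s%n%b%n' $hash > $logfile
            darcs add -q --umask=0002 -r .
            darcs record -a -A "$author" --logfile="$logfile"
            rm -f $logfile
        done
        darcs push -a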

    Read the article

  • How to Bind Data and manipulate it in a GridView with MVP

    - by DotNetDan
    I am new to the whole MVP thing and slowly getting my head around it all. A problem I am
    having is how to stay consistent with the MVP methodology when populating GridViews (and
    DDLs, but we will tackle that later). Is it okay to have the GridView connected straight
    to an ObjectDataSource via DataSourceID? To me this seems wrong, because it bypasses all
    the separation of concerns MVP was made for. So, with that said, how do I do it? And how
    do I handle sorting (do I send handler events over to the presentation layer, and if so,
    how does that look in code)? Right now I have a GridView that has no sorting. Code below.

    ListCustomers.aspx.cs:

        public partial class ListCustomers : System.Web.UI.Page, IlistCustomer
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // On every page load, create a new presenter object with a
                // constructor receiving the page's IlistCustomer view
                ListUserPresenter ListUser_P = new ListUserPresenter(this);
                // Call the presenter's PopulateList to bind data to the gridview
                ListUser_P.PopulateList();
            }

            GridView IlistCustomer.UserGridView
            {
                get { return gvUsers; }
                set { gvUsers = value; }
            }
        }

    The interface (IlistCustomer.cs) - is this bad, sending in an entire GridView control?

        public interface IlistCustomer
        {
            GridView UserGridView { set; get; }
        }

    The presenter (ListUserPresenter.cs):

        public class ListUserPresenter
        {
            private IlistCustomer view_listCustomer;
            private GridView gvListCustomers;
            private DataTable objDT;

            public ListUserPresenter(IlistCustomer view)
            {
                // Raise an error if an IlistCustomer was not sent in
                if (view == null)
                    throw new ArgumentNullException("ListCustomer View cannot be blank");
                // Set the local IlistCustomer interface view
                this.view_listCustomer = view;
            }

            public void PopulateList()
            {
                // Fill the local GridView from the local IlistCustomer
                gvListCustomers = view_listCustomer.UserGridView;
                // Instantiate a new CustomerBusiness object to contact the database
                CustomerBusiness CustomerBiz = new CustomerBusiness();
                // Call CustomerBusiness's GetListCustomers to fill a DataTable
                objDT = CustomerBiz.GetListCustomers();
                // Bind the DataTable to the gridview
                gvListCustomers.DataSource = objDT;
                gvListCustomers.DataBind();
            }
        }

    Read the article

  • Can ElementTree be told to preserve the order of attributes?

    - by dmckee
    I've written a fairly simple filter in Python using ElementTree to munge the contents of
    some XML files. And it works, more or less. But it reorders the attributes of various
    tags, and I'd like it not to do that. Does anyone know a switch I can throw to make it
    keep them in the specified order?

    Context: I'm working with and on a particle physics tool that has a complex, but oddly
    limited, configuration system based on XML files. Among the many things set up that way
    are the paths to various static data files. These paths are hardcoded into the existing
    XML, there are no facilities for setting or varying them based on environment variables,
    and in our local installation they are necessarily in a different place. This isn't a
    disaster, because the combined source- and build-control tool we're using allows us to
    shadow certain files with local copies. But even though the data files are static, the
    XML isn't, so I've written a script for fixing the paths - but with the attribute
    rearrangement, diffs between the local and master versions are harder to read than
    necessary.

    This is my first time taking ElementTree for a spin (and only my fifth or sixth Python
    project), so maybe I'm just doing it wrong. Abstracted for simplicity, the code looks
    like this:

        tree = elementtree.ElementTree.parse(inputfile)
        i = tree.getiterator()
        for e in i:
            e.text = filter(e.text)
        tree.write(outputfile)

    Reasonable or dumb?

    Related links:
        How can I get the order of an element attribute list using Python xml.sax?
        Preserve order of attributes when modifying with minidom
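
    For what it's worth, ElementTree exposes no ordering switch: attributes live in a plain
    dict and, in the Python versions of that era, are sorted alphabetically at write time;
    Python 3.8 and later preserve the parsed order instead. A minimal sketch of the
    path-fixing filter under modern ElementTree, with the path strings as placeholders:

        import xml.etree.ElementTree as ET

        def fix_path(text):
            # hypothetical rewrite of the hardcoded data paths
            return text.replace("/master/data", "/local/data") if text else text

        tree = ET.parse("input.xml")
        for e in tree.iter():          # iter() is the modern spelling of getiterator()
            e.text = fix_path(e.text)
        tree.write("output.xml")       # attribute order is preserved on Python 3.8+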

    Read the article

  • Problem with MANIFEST.MF in a JAR

    - by dhananjay
    I have created my JAR file in the following folder:

        /usr/local/bin/niidle.jar

    And I have one JAR file in the following folder:

        /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar

    This file, hector-0.6.0-17.jar, I have to include on the Class-Path in MANIFEST.MF. When
    I specify the class path in MANIFEST.MF as follows:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar

    and run it using the command:

        java -jar /usr/local/bin/niidle.jar

    it works properly. But I don't want to give the full Class-Path; I want to give the
    Class-Path as follows:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: lib/hector-0.6.0-17.jar

    And when I run this using the same command, it shows this error message:

        Exception in thread "main" java.lang.NoClassDefFoundError: me/prettyprint/hector/api/Serializer
            at com.ensarm.niidle.web.scraper.NiidleScrapeManager.main(NiidleScrapeManager.java:21)
        Caused by: java.lang.ClassNotFoundException: me.prettyprint.hector.api.Serializer
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 1 more

    Please tell me the solution for this.
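
    This one is documented behavior: relative Class-Path entries in MANIFEST.MF are resolved
    against the directory containing the JAR itself, not the current working directory. So
    lib/hector-0.6.0-17.jar only resolves if a lib folder sits next to niidle.jar:

        mkdir -p /usr/local/bin/lib
        cp /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar /usr/local/bin/lib/
        java -jar /usr/local/bin/niidle.jar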

    Read the article

  • VEMap and a GeoRSS feed (hosted separately)

    - by Alexis Abril
    The scenario is as follows: a WCF web service exists that outputs a valid GeoRSS feed.
    This lives in its own domain, as a number of different applications have access to it. A
    web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual
    Earth map object). Now, VEMap can accept an input feed in this format via the following:

        var layer = new VEShapeLayer();
        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
        map.ImportShapeLayerData(veLayerSpec, onComplete, true);

    onComplete is a callback function I'm using to replace the default pin graphic with
    something custom. The question is in regards to "someurl", which is a path to a local XML
    file containing the geographic information (GeoRSS simple format). I've realized this
    feed and the map must be hosted in the same domain, so I've created a generic handler
    that reads the remote feed and returns it in the same format:

        var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);

    When I do this, I get the VEMap error "z is null". This is the same error one would
    receive when trying to access a remote feed. When I copy the feed into a local XML file
    (e.g. "feed.xml") there is no error. The order of operations is currently: remote feed ->
    local handler -> VEMap import. If I'm overcomplicating this procedure, let me know! I'm a
    bit new to the Bing Maps API and might have missed something. Any assistance is
    appreciated.

    Read the article

  • Git: failed to push to remote server, 'REPOSITORY_PATH' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on
    Windows Vista (64-bit). When I try to push my local files to the remote bare repository,
    I get the following error message:

        git.exe push "origin" master:master
        git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'.
        fatal: The remote end hung up unexpectedly

    The arbitrary URL is:

        username@serverip:C:/Git_Repository/.git

    The same URL worked just fine for clone/fetch/pull, and access from a local directory on
    the remote machine to this bare repository is no problem either, so I believe there is
    something wrong with my path. I can push/pull to GitHub correctly, but there I was using
    the URL provided by GitHub. Does anyone know what's wrong with my configuration?

    Here is my remote .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = true
            logallrefupdates = true
            ignorecase = true
            hideDotFiles = dotGitOnly

    Here is my local .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = false
            logallrefupdates = true
            symlinks = false
            ignorecase = true
            hideDotFiles = dotGitOnly
        [remote "origin"]
            fetch = +refs/heads
            url = username@serverip:C:/Git_Repository/.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    Thanks for any pointers.
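
    One educated guess worth trying: the scp-style user@host:path syntax stops at the first
    colon, and the Windows drive letter adds a second one, which the remote side can misparse
    when it builds the git-receive-pack command line (consistent with the error quoting the
    bare path as if it were a command). The explicit ssh:// URL form sidesteps that parsing:

        git remote set-url origin "ssh://username@serverip/C:/Git_Repository/.git"
        git push origin master:master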

    Read the article

  • Login fails when authorizing Dovecot through PAM

    - by eyadof
    I installed Dovecot 2.0.13 with Postfix on Ubuntu Server 11.10. After installation I can
    send e-mail with the mail command, and Dovecot works when I test it with telnet. I then
    installed Roundcube on it, and the installation passed all tests. I want to authenticate
    Dovecot against system users through PAM, so I wrote in my dovecot.conf:

        passwd pam {
          args = *
        }

    and in pam.d/dovecot I wrote:

        auth    required pam_unix.so nullok
        account required pam_unix.so

    But when I reload Dovecot and try to log in, it still fails. How can I solve this?
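
    As a point of comparison, Dovecot 2.x normally declares PAM authentication with a passdb
    block rather than a bare passwd pam section - a minimal sketch, assuming system users
    should still be looked up via /etc/passwd for home directories and UIDs:

        # dovecot.conf (or conf.d/auth-system.conf.ext)
        passdb {
          driver = pam        # checks the password through /etc/pam.d/dovecot
        }
        userdb {
          driver = passwd     # resolves uid/gid/home from /etc/passwd
        }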

    Read the article

  • Access a view inside a tab navigator when a tab is clicked

    - by magnus.lassi
    Hi, I have a view in Flex 3 where I use a tab navigator and a number of views inside it.
    I need to know which view was clicked, because if it's one specific view then I need to
    take action - i.e. if the view with id "secondTab" is clicked, then do something. I have
    set it up to be notified; my problem is that I need to be able to know what view it is.
    Calling tab.getChildByName or a similar method seems to only get me back a TabSkin
    object.

        <?xml version="1.0" encoding="utf-8"?>
        <mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml" width="100%" height="100%"
                 xmlns:local="*" creationComplete="onCreationComplete(event)">
            <mx:Script>
                <![CDATA[
                    import mx.events.FlexEvent;
                    import mx.controls.Button;

                    protected function onCreationComplete(event:Event):void
                    {
                        for(var i:int = 0; i < myTN.getChildren().length; i++)
                        {
                            var tab:Button = myTN.getTabAt(i);
                            tab.addEventListener(FlexEvent.BUTTON_DOWN, tabClickHandler);
                        }
                    }

                    private function tabClickHandler(event:FlexEvent):void
                    {
                        var tab:Button;
                        if(event.currentTarget is Button)
                        {
                            tab = event.currentTarget as Button;
                            // how do I access the actual view hosted in the tab that was clicked?
                        }
                    }
                ]]>
            </mx:Script>
            <mx:TabNavigator id="myTN">
                <local:ProductListView id="firstTab" label="First Tab" width="100%" height="100%" />
                <local:ProductListView id="secondTab" label="Second Tab" width="100%" height="100%" />
            </mx:TabNavigator>
        </mx:VBox>

    Read the article

  • Validate domain against LDAP?

    - by lucian.jp
    I have a procedure to get the name of the logged-in user shown on the site. I get it this
    way:

        var winIdentity = (WindowsIdentity) HttpContext.Current.User.Identity;
        if (winIdentity != null)
        {
            string domainUser = winIdentity.Name.Replace(@"\", "/");
            string domain = winIdentity.Name.Split('\\')[0];
            string user = winIdentity.Name.Split('\\')[1];
            var myDe = new DirectoryEntry(ConfigurationManager.ConnectionStrings["LDAP"].ConnectionString,
                                          ConfigurationManager.AppSettings["LDAPCredentials"].Split(';')[0],
                                          ConfigurationManager.AppSettings["LDAPCredentials"].Split(';')[1]);
            var deSearcher = new DirectorySearcher(myDe) { Filter = "(&(sAMAccountName=" + user + "))" };
            SearchResult result = deSearcher.FindOne();
            if (result != null)
            {
                DirectoryEntry userDe = result.GetDirectoryEntry();
                lblNameAD.Text = string.Format(lblNameAD.Text,
                                               userDe.Properties["givenName"].Value,
                                               userDe.Properties["sn"].Value);
            }
            else
            {
                var adEntry = new DirectoryEntry("WinNT://" + domainUser);
                string fullname = adEntry.Properties["FullName"].Value.ToString();
                lblNameAD.Text = string.Format(lblNameAD.Text,
                                               !string.IsNullOrEmpty(fullname) ? fullname : user, null);
            }
        }

    The problem is that if I have a local user account with the same username as one from
    LDAP, it passes the check and returns the name. For example:

        local\MyUser
        domain\MyUser

    Both return the name from AD, even if the local one isn't a domain account. It would be
    perfect if I could search in LDAP for domain\user, but it seems I can't. I also tried to
    restrict the DC with the DirectorySearcher, but the domain name is "domain" and I only
    have "dc=dom" and "dc=com" - no DC for the full domain name.

    Read the article

  • Fetching real-time data from Excel

    - by Umesh Sharma
    I am seriously looking for your valuable help, first time here. If possible, please help
    me. I am developing a VB.NET app in which I read "real-time data" from an Excel sheet
    using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel
    sheet are fetching stock data from some local DDE server, like "=XYZ|Bid!GOLD",
    "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on. Some cells also have fixed values like
    "Symbol", "Bid Rate", "32.90" etc. The values of the DDE-mapped cells (i.e. =XYZ|xxxx!yyy)
    are continuously changing.

    THE PROBLEM is here: "fixed values" from Excel cells come through to my app quite OK, but
    all the DDE-mapped cell values come through as "-2146826246" (when the data source's
    local DDE server is ON) or "-2146826265" (OFF). If I use C#.NET it's all OK - just not
    with VB.NET. I want to display a range of Excel cells (A1 to J50) in a VB.NET ListView;
    the values change every 200 ms (5 times per second).

    Important: is it possible to BIND "ListView items/column values" to "Excel cells" or some
    local memory variables? Currently I am reading Excel "cell by cell" and trying to put the
    values into the .NET ListView, but CPU usage is very high and it's a very slow process.
    If yes, then how, please? I am a VFP developer but new to .NET. It's very easy in VFP, so
    why not in .NET? Please guide me if someone has the solution.

    Read the article

  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs and job
    outcomes. This application is executed through a scheduled task using a local service
    account. This service account is local to the application server only and is not present
    on any SQL Server to be inspected. I am having problems getting information on jobs and
    job outcomes when connecting to the servers with a user that has reader rights on the
    system tables. If we make the user sysadmin on the server, it all works fine.

    My question to you is: what are the minimum privileges a local SQL Server user needs in
    order to connect to the server and inspect jobs/job outcomes using the SQL SMO API?

    I connect to each SQL Server by doing the following:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs by reading smoServer.JobServer.Jobs and the JobSteps property on each of
    these jobs. The variable smoServer is of type Microsoft.SqlServer.Management.Smo.Server,
    and user/password belong to a user found on each SQL Server to be inspected. If "user" is
    sysadmin on the SQL Server to be inspected, all works OK, as it also does if we set
    ConnectAsUser to true and execute the scheduled task using my own credentials, which
    grant me sysadmin privileges on SQL Server per my Active Directory membership. Thanks!
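
    A likely candidate short of sysadmin, assuming SQL Server 2005 or later: membership in
    the msdb role SQLAgentReaderRole, which is intended to allow viewing all Agent jobs and
    their history without any modification rights - a sketch, with inspector as a
    hypothetical login:

        USE msdb;
        CREATE USER inspector FOR LOGIN inspector;   -- hypothetical login name
        EXEC sp_addrolemember N'SQLAgentReaderRole', N'inspector';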

    Read the article

  • How to get rid of Gmail's "on behalf of" using Postfix

    - by user2815
    I'm using the default configuration of Postfix on Ubuntu 9.04, and I've been trying to
    configure Gmail to send email through my server. I'm looking for a simple configuration
    for 10-15 users (like using a password file), but all the tutorials I have found have
    been too extensive and seem very enterprise-oriented. I just need to configure Postfix
    with AUTH/TLS in a way that is compatible with Gmail.
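
    A minimal sketch of the main.cf side, assuming Dovecot (or another SASL backend) is
    available to verify the passwords; the parameter names are standard Postfix, but every
    path and the choice of backend here are placeholders:

        # /etc/postfix/main.cf (excerpt)
        smtpd_tls_cert_file = /etc/ssl/certs/mailserver.pem      # hypothetical paths
        smtpd_tls_key_file  = /etc/ssl/private/mailserver.key
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_sasl_auth_enable = yes
        smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination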

    Read the article

  • Routing trouble for RESTful API - Rails

    - by aressidi
    I'm building out an API for a web app that I've been working on for some time, starting
    with the User model. The user portion of the API will allow remote clients to a) retrieve
    user data, b) update user information and c) create new users. I've gotten all of this to
    work, but it doesn't seem like it's set up correctly. Here are my questions:

        1. Should the API endpoint be users or user? What's the best practice?
        2. I have to add the action to the end of the URL, which I would expect to be picked
           up from the request type instead, so I don't have to specify it explicitly. How do
           I set up my routes properly so as not to have to include the method for protected
           actions?

    Let me give some examples. A GET request for show - I want it to work without the "show":

        curl -u rmbruno:blah http://app.local/api/users/show

    A PUT request for update - I want it to work without the "update":

        curl -X put -F 'user[forum_notifications]=true' -u rmbruno:blah http://app.local/api/users/update

    Create - this works with or without 'create', which is what I want for all these actions:

        curl -X post -F 'user[login]=mamafatta' -F 'user[email][email protected]' -F 'user[password]=12345678' http://twye.local/api/users/

    How do I structure the routes so as not to require the action name? Isn't that the common
    way to do RESTful APIs? Here is my route for the API now:

        map.namespace :api do |route|
          route.resources :users
          route.resources :weight
        end

    I'm using restful_authentication, which is handling the HTTP auth in curl. Any guidance
    on the routes issue and best practice on singular versus plural would be really helpful.
    Thanks! -A
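
    For reference, map.resources already dispatches on the HTTP verb plus the record id, so
    the action name never needs to appear in the URL; the plural users is the REST
    convention. Assuming the routes above and a user with id 1 standing in for the real
    record, the calls would look like:

        curl -u rmbruno:blah http://app.local/api/users/1                    # GET  -> show
        curl -X PUT -F 'user[forum_notifications]=true' \
             -u rmbruno:blah http://app.local/api/users/1                    # PUT  -> update
        curl -X POST -F 'user[login]=mamafatta' http://app.local/api/users   # POST -> create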

    Read the article

  • How to secure a directory in Apache using a PHP session

    - by Cogsy
    I have a site that uses PHP sessions for authentication. There is one directory that I
    would like to restrict access to that does not use any PHP; it's just full of static
    content. I just don't know how to restrict access without routing every request through a
    PHP script. Is there some way to have Apache check the session credentials and restrict
    access, like Basic Auth does?
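
    Stock Apache has no module that can read PHP's session store directly, so the usual
    pattern is a tiny PHP gateway after all: deny direct access to the folder and rewrite
    requests through a script that checks $_SESSION and streams the file. A sketch of the
    rewrite half, where check_session.php is a hypothetical gateway script:

        # .htaccess in the protected directory
        RewriteEngine On
        RewriteRule ^(.*)$ /check_session.php?file=$1 [L,QSA]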

    Read the article

  • How to choose a lightweight database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in
    C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As
    we have a lot of POS devices in a chain for one store, I would love to have a local
    backend database system on each POS device. The scenario is as follows: when the main
    server goes down, the POS application should continue working "offline" with the local
    database until the connection to the main server comes up again.

    Now I am in a dilemma over which local database is going to be most suitable for me. Here
    are some notes to help point me in the right direction:

        1. It has to be light - my POS devices are usually old and weak on performance.
        2. It has to be free - I have a lot of devices and do not want additional costs
           besides the main SQL Server.
        3. One day I'd love to try porting all of it to Mono and Linux.

    Here is what I've researched so far:

        - Simple XML: light, but I am afraid of the performance; my main items table averages
          10K records.
        - SQL Server Express: I am afraid my POS devices' hardware is too poor for it, and it
          is also hard to install and configure on each device.
        - The lesser-known Advantage Database Server, which has a free distribution of its
          offline ADT system.
        - DBF with an extended library: respect for the good old DBFs, but that era is behind
          me, along with Clipper.
        - MS Access.
        - SQLite: what I like most for now, but I am afraid of how it will pair with MS SQL -
          do they have the same data types?

    I know this question invites a lot of subjective answers, but can someone at least
    recommend some other lite database systems, or the things I should pay the most attention
    to before choosing a database?

    Read the article

  • COM-Objects containing maps / content error(0)

    - by Florian Berzsenyi
    I'm writing a small wrapper to get familiar with some important topics in C++ (WinAPI,
    COM, STL, OOP). For now, my class shall be able to create a (child) window. Mainly, this
    window is connected to a global message loop that distributes messages to a local loop of
    the right instance (global is static, local is virtual). Obviously there are better ways
    to do that, but I'm using std::map to store HWNDs and their instance pointers in pairs
    (the global loop looks up the pointer by the HWND parameter, gets it from the map, and
    then calls the local loop).

    Now, it appears that the map does not accept any values, for an unknown reason. It seems
    to allocate enough space, but something goes wrong anyway ("(error) 0" is displayed
    instead of the entries in Visual C++). I've looked this up in Google as well and found
    out that maps cause some trouble when used across classes AND DLLs. Could this be the
    reason, and is there any solution?

    Protected scope of the class:

        static std::map<HWND,MAP_BASE_OBJECT*> m_LoopBuf;

    Implementation in the .cpp file:

        std::map<HWND,MAP_BASE_OBJECT*> HWindow::m_LoopBuf;

    Read the article

  • proper use of volatile keyword

    - by luke
    I think I have a pretty good idea about the volatile keyword in Java, but I'm thinking
    about refactoring some code and I thought it would be a good idea to use it. I have a
    class that basically works as a DB cache: it holds a bunch of objects that it has read
    from a database, serves requests for those objects, and occasionally refreshes from the
    database (based on a timeout). Here's the skeleton:

        public class Cache {
            private HashMap mappings = ....;
            private long last_update_time;

            private void loadMappingsFromDB() {
                //....
            }

            private void checkLoad() {
                if (System.currentTimeMillis() - last_update_time > TIMEOUT)
                    loadMappingsFromDB();
            }

            public Data get(ID id) {
                checkLoad();
                //.. look it up
            }
        }

    The concern is that loadMappingsFromDB could be a high-latency operation, and that's not
    acceptable. So initially I thought I could spin up a thread at cache startup and have it
    sleep and then update the cache in the background. But then I would need to synchronize
    my class (or the map), and I would just be trading an occasional big pause for making
    every cache access slower.

    Then I thought: why not use volatile? I could define the map reference as volatile:

        private volatile HashMap mappings = ....;

    and then in get (or anywhere else that uses the mappings variable) I would just make a
    local copy of the reference:

        public Data get(ID id) {
            HashMap local = mappings;
            //.. look it up using local
        }

    The background thread would just load into a temporary map and then swap the references
    in the class:

        HashMap tmp;
        // load tmp from DB
        mappings = tmp; // swap variables, forcing a write barrier

    Does this approach make sense? And is it actually thread-safe?

    Read the article

  • Is there a way to apply a GPO to all but selective users? (SBS 2008)

    - by CandyCo
    I've created a GPO in SBS 2008 that deploys and updates software. Unfortunately, one of
    our VPN users lives out in the sticks and has severe latency, so the startup processes
    and updates time out and take an awfully long time, if they ever complete at all. I'd
    like to apply this GPO to all authenticated users except for him, without having to
    create a new custom user group. Any thoughts?

    Read the article

  • SSH Advanced Logging

    - by Radek Šimko
    I've installed openSUSE on my server and want to set SSH up to log every command that is
    sent to the system over it. I've found this in my sshd_config:

        # Logging
        # obsoletes QuietMode and FascistLogging
        #SyslogFacility AUTH
        #LogLevel INFO

    I guess that both of those directives have to be uncommented, but I'd like to log every
    command, not only authentication (login/logout via SSH). If someone breaks into my
    system, I just want to know what he did.
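
    Uncommenting those two directives only restores login/logout records; sshd itself never
    sees the commands typed inside a shell session, so command-level logging has to happen at
    the PAM or kernel audit layer. A sketch, assuming pam_tty_audit and auditd are available
    on the system:

        # /etc/ssh/sshd_config
        SyslogFacility AUTH
        LogLevel VERBOSE    # more detail than INFO (key fingerprints, etc.)

        # /etc/pam.d/sshd -- records keystrokes to the audit log (requires auditd)
        session required pam_tty_audit.so enable=*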

    Read the article

  • Delphi App has "No Debug Info" when Debugging

    - by James L.
    We have built an application that uses packages and components. When we debug the
    application, the "Event Log" in the IDE often shows that our BPLs are being loaded
    without debug information ("No Debug Info"). This doesn't make sense, because all our
    packages and EXEs are built with debug.

    (each project) | Options | Compiling:

        [x] Assertions
        [x] Debug information
        [x] Local symbols
        Symbol reference info = "Reference info"
        [ ] Use debug .dcus
        [x] Use imported data references

    (each project) | Options | Linking:

        [x] Debug information
        Map file = Detailed

    We have 4 projects, all built with runtime packages:

        1. Core.bpl
        2. Components.bpl
        3. Plugin.bpl (uses both #1 & #2)
        4. MainApp.exe (uses #1)

    Problems observed:

        1. Many times when we debug, Components.bpl is loaded with debug info, but all values
           in the "Local Variables" window are blank. If you hover your mouse over a variable
           in the code, there is no popup, and the Evaluate window also shows nothing (the
           "Result" pane is always blank).
        2. Sometimes the Event Log shows "No Debug Info" for various BPLs. For instance, if
           we activate the Plugin.bpl project, set its Run | Parameters | Host Application to
           MainApp.exe, and press F9, all modules seem to load with "Has Debug Info" except
           for the Plugin.bpl module - when it loads, the Event Log shows "No Debug Info".
           However, if we close the app and immediately press F9 again, it runs without
           recompiling anything, and this time Plugin.bpl is loaded with debug info ("Has
           Debug Info").

    Questions:

        1. What would cause the "Local Variables" window to not display the values?
        2. Why are BPLs sometimes loaded without debug info when the BPL was compiled with
           debug and all the debug files (.dcu, .map, etc.) are available?

    Read the article

  • no such file to load -- rubygems (LoadError)

    - by Vineeth
    Hello, I recently installed Rails on Fedora 12. I'm new to Linux as well. Everything
    works fine on Windows 7, but I'm facing lots of problems on Linux. Help, please! I've
    installed all the essentials, to my knowledge, to get the basic script/server up and
    running, yet this error from boot.rb pops up when I try script/server.

    Some details I'd like to give here. The directories where rails, ruby and gem are
    installed:

        [vineeth@localhost my_app]$ which ruby
        /usr/local/bin/ruby
        [vineeth@localhost my_app]$ which rails
        /usr/bin/rails
        [vineeth@localhost my_app]$ which gem
        /usr/bin/gem

    And when I run script/server, this is the error:

        [vineeth@localhost my_app]$ script/server
        ./script/../config/boot.rb:9:in `require': no such file to load -- rubygems (LoadError)
            from ./script/../config/boot.rb:9
            from script/server:2:in `require'
            from script/server:2

    And the PATH setup looks like this:

        [vineeth@localhost my_app]$ cat ~/.bash_profile
        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi

        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin
        export PATH="/usr/local/bin:/usr/local/sbin:/usr/bin/ruby:$PATH"

    I suppose it is something to do with the PATH. Let me know what I need to change here. If
    there are other changes I should make, please let me know. Thanks.
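
    This symptom usually means the script is being run by a Ruby that is not the one the gems
    were installed for - here there are two candidates, /usr/local/bin/ruby and whichever
    Ruby /usr/bin/gem belongs to. A couple of checks, plus the classic Ruby 1.8 workaround of
    preloading rubygems:

        /usr/local/bin/ruby -v              # the ruby the shebang resolves to
        gem env | grep 'RUBY EXECUTABLE'    # the ruby the gems were installed for
        export RUBYOPT=rubygems             # classic 1.8-era fix: auto-require rubygems
        script/server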

    Read the article

  • questions about using svn

    - by ajsie
    I have set up an SVN repository on a remote Ubuntu server using WebDAV (http://ip/svn),
    but I have some questions:

        1. Should I create one SVN repository for each project folder? So should I run
           "svnadmin create" 5 times for 5 projects, or should I import them into separate
           folders of one SVN repository?
        2. After I have created an SVN repo and imported a project folder into it with
           "svn import", should I DELETE my original local folder, thus only having it in the
           SVN repo on the remote Ubuntu server? Is this safe, or should I still keep a local
           copy for some reason (I won't work in that one)?
        3. I use NetBeans to check out projects from the SVN repo, then I edit the files and
           commit the changes. But how should I update the live web server content (in
           /var/www)? Should I use "svn checkout" on my Ubuntu server and check out to
           /var/www, or should I use NetBeans to check out to a local folder and then upload
           the files to /var/www with FTP or WebDAV (and which one of them should I use)?
        4. Say I have an SVN repo and 4 programmers are working with it, checking out and
           committing. How do I check what is happening in the repo - who made which changes,
           and all the changes from day 1 to day 213? Do NetBeans/Eclipse let me see this
           kind of information, or do I download another application for it?
        5. Should I have one SVN user for each programmer on the Ubuntu server? Is this
           accomplished by running "htpasswd" 4 times for 4 programmers? And how do I couple
           all these users to the same group, so that I can modify file access specifically
           for the SVN group and all its members?
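
    A few command sketches for the questions above, with placeholder paths: one repository
    per project keeps histories separate, while folders in one repository share revision
    numbers; svn log shows who changed what since day 1; and a checkout on the server plus
    svn update is the simplest way to publish to /var/www:

        svnadmin create /var/svn/project1           # one repository per project, or...
        svn mkdir -m "layout" http://ip/svn/project1 http://ip/svn/project2   # folders in one repo
        svn log -v http://ip/svn/project1           # every revision: author, date, changed paths
        svn checkout http://ip/svn/project1/trunk /var/www/project1   # then "svn update" to deploy
        htpasswd /etc/subversion/passwd alice       # one entry per programmer (path hypothetical)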

    Read the article

  • Two-way sync with rsync

    - by mwm
    I have a folder a/ and a remote folder A/. I now run something like this in a Makefile:

        get-music:
            rsync -avzru server:/media/10001/music/ /media/Incoming/music/

        put-music:
            rsync -avzru /media/Incoming/music/ server:/media/10001/music/

        sync-music: get-music put-music

    When I make sync-music, it first gets all the diffs from server to local, and then the
    opposite, sending all the diffs from local to server. This works very well, but only if
    there are just updates or new files. If there are deletions, it doesn't do anything.

    rsync has --delete and --delete-after options to help accomplish what I want, but the
    thing is, they don't work for a two-way sync. Deleting server files on a sync when local
    files have been deleted works; but if, for some reason (explained below), I have files
    that don't exist on the server but exist locally and were deleted elsewhere, I want them
    removed locally, not copied back from the server (as happens now).

    The thing is, I have 3 machines in play:

        1. desktop
        2. notebook
        3. home server

    So sometimes the server will have files that were deleted in a sync with the notebook,
    for example, and then, when I run a sync with my desktop (where the deleted server files
    still exist), I want those files to be deleted locally and not copied again to the
    server. I guess this is only possible with a database and a log of operations :P Any
    simple solutions? Thank you.
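
    The guess at the end is right: since rsync keeps no state between runs, it cannot tell
    "deleted on one side" from "not copied yet". unison is the usual tool for exactly this
    case - it keeps an archive of the last synchronized state of both roots, so it can
    propagate deletions safely in both directions. A hedged sketch, paths as in the Makefile
    above:

        # first run builds the archive; later runs propagate changes and deletions both ways
        unison /media/Incoming/music ssh://server//media/10001/music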

    Read the article
