Search Results

Search found 10106 results on 405 pages for 'fail fast'.


  • Installing FreeNAS 8.3 problems

    - by osij2is
    I'm trying to install FreeNAS 8.3 on some desktop-level hardware (AMD Phenom + 890FX + 16GB) and I've been unsuccessful. I initially tried using a USB stick and followed the instructions on the FreeNAS site here. Making the USB stick was simple, as the instructions laid out, but as soon as the USB is detected (during the boot process) some text appears and quickly vanishes, and my machine reboots endlessly. After trying several different ways to make the USB stick, I tried using a DVD-ROM, but again I had the same issue as with the USB stick. This leads me to conclude that a BIOS setting is incorrect, but I have no idea which one. I've disabled "fast" boot in the BIOS, and I've configured the boot order correctly for both the USB stick and the DVD-ROM drive, so I know the boot order itself is working. Have I missed anything that might be causing this problem? I'm not a FreeBSD/FreeNAS expert by any means.

    Read the article

  • How can HAProxy improve availability, or "how can I prevent my site from going down"? [closed]

    - by Joe Hopfgartner
    I am aware of what HAProxy does, but what if my HAProxy goes down? Or what if my DNS servers go down? Yes, DNS is less of a problem. However, DNS only resolves to an IP, and that IP is announced via BGP to be routed over some router. What if that router goes down? Of course, if I have complicated application servers that are likely to fail, HAProxy can significantly improve uptime. But my application isn't complicated. In fact, my application may very well just be delivering a small static HTML file via HTTP. Basically, if any user anywhere types in MYDOMAIN.COM, I want the user to get SOMETHING on the screen other than a timeout or a DNS resolution error. How can I do that? The point of entry is the difficult part - the so-called "initial closure mechanism".
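
    One common first layer, sketched below on the assumption of two load-balancer boxes on the same LAN, is a floating IP moved between them by VRRP (e.g. keepalived): if the box holding the address dies, the other takes the IP over within seconds. This does nothing for a failed upstream router or data centre, which needs multiple A records, multiple sites or anycast. The interface name and address here are illustrative only:

        vrrp_instance VI_1 {
            state MASTER            # the peer box uses "state BACKUP" and a lower priority
            interface eth0
            virtual_router_id 51
            priority 101
            advert_int 1
            virtual_ipaddress {
                203.0.113.10        # the address published in DNS
            }
        }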

    Read the article

  • Privileged command as part of cronjob

    - by user42756
    Hi, I'm facing a weird problem on a Unix-based machine. Here is the story: I have a personal username/password on a Unix machine with limited privileges. Whenever I need to execute some commands, I have to substitute user with the su command and then execute them normally. Now I need to add a cron job that uses such privileged commands, so I added the job to the crontab of the user I substitute to, in order to have access to these commands. Strangely, these commands fail for some reason when run as a cron job, although when I execute them directly from the shell (after su) they work seamlessly. Why does this happen? Why do these commands not work as part of cron jobs? Thank you
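
    A likely culprit, assuming the commands themselves are permitted for that user, is cron's minimal environment: PATH is reduced to a few directories and no login/profile scripts run, so a command that works after su can still fail from crontab. A hedged sketch (script name, schedule and log path are placeholders):

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        # m  h  dom mon dow  command   -- log output so the real error is visible
        30   2  *   *   *    /usr/local/bin/nightly_task.sh >> /var/tmp/nightly_task.log 2>&1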

    Read the article

  • Justification of Amazon EC2 Performance

    - by Adroidist
    I have a .jar file that implements a server which receives an image over TCP as bytes (at most 500 KB) and writes it to a file. It then applies a Sobel filter to the image and sends the result over a TCP socket to the client side. I ran it on my laptop and it was very fast, but when I put it on an Amazon EC2 m1.large instance, I found it is very slow - around 10 times slower. It might be inefficiency in the code, but in fact the code does nothing but receive the image (like any byte file), run the Sobel algorithm, and send the result. I have the following questions: 1- Is this normal performance for an Amazon EC2 server? I have read the following links: link1 and link2. 2- Even if the code is not that efficient, the server is handling a very low load (just one client); does the "inefficient" code justify such performance? 3- My laptop is only dual core... Why would the Amazon EC2 server have worse performance than my laptop? How is this explained? Excuse me for my ignorance.

    Read the article

  • Second virtual host on Apache redirects to root

    - by Slytherin
    I tried to set up my second virtual host, but I'm getting the default /var/www/index.html (the one that says "It works!"). I followed the same procedure as the first time, but this time it didn't work. My configuration looks like this:

        <VirtualHost *:80>
            ServerName messup
            ServerAlias messup.loc
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/messup
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    My hosts configuration is the following:

        127.0.0.1    localhost
        127.0.1.1    SlytherinPC
        127.0.0.1    AFS.loc
        127.0.0.1    messup.loc

    After this, Apache wouldn't restart; it printed no message other than [fail], but stop followed by start worked. What am I missing?
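
    A bare [fail] on restart usually means a configuration error that stop/start happens to mask. A hedged sketch of the usual checks, assuming a Debian/Ubuntu-style layout, a sites-available file named messup, and Apache 2.2:

        apache2ctl configtest                        # prints the actual syntax error, if any
        sudo a2ensite messup                         # enable the vhost if it lives in sites-available/
        sudo /etc/init.d/apache2 reload
        tail -n 50 /var/log/apache2/error.log        # the reason behind the bare [fail]
        # Apache 2.2 name-based vhosts also need this once, globally:
        # NameVirtualHost *:80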

    Read the article

  • How to properly start gvfs without gnome?

    - by 9000
    I have a Debian testing box with Xfce (no Gnome, no Nautilus). It has all the gvfs-related stuff installed, including all backends and the FUSE interface. But any attempt to gvfs-mount anything (like sftp://... or smb://...) fails with "error opening file: Operation not supported", and gigolo shows only 'unix device (file)' in the list of supported protocols. My ~/.gvfs has rwx permissions, and I'm a member of the fuse group; other FUSE-related stuff works for me. What do I do? Where should I look?
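
    The gvfs backends are activated over the D-Bus session bus, so without a session bus wrapped around the X session only the local 'unix device (file)' backend tends to show up. A hedged sketch, with paths typical of Debian but worth verifying locally:

        # make sure the Xfce session runs inside a D-Bus session bus
        dbus-launch --exit-with-session startxfce4
        # check that the backend daemons exist and can be started by hand
        ls /usr/lib/gvfs/             # should list gvfsd, gvfsd-sftp, gvfsd-smb, ...
        /usr/lib/gvfs/gvfsd &         # normally spawned on demand via D-Bus
        gvfs-mount sftp://user@host/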

    Read the article

  • Will adding extra RAM in my computer speed it up?

    - by Harry Simpson
    I have a 5-year-old Dell Inspiron 530 desktop computer which is slowly grinding to a halt. Someone told me that if I put extra RAM in, it'll speed it up. Inside the computer there are four slots for memory, but only two have memory in them, 1GB each. If I bought another two 1GB modules and put them in the free slots, would it speed the computer up (would it be twice as fast?), and is it as simple as just putting them in, or are there other things I need to do?

    Read the article

  • Lots of artifacts while streaming HD content with VLC 0.9.9 on CentOS

    - by Zsub
    I'm trying to stream (multicast) an x264-encoded file using VLC. This in itself succeeds, but the stream has a huge number of artifacts. This seems to suggest that the data cannot be transported fast enough; if I check network usage, though, it's only using about 15 Mbit/s. I have a similar SD stream which works perfectly. I think I could improve stream performance by not streaming the raw data, but I cannot seem to get this working. On keyframes, all artifacts clear for a short while (less than a second). This is the command I use: vlc -vv hdtest.mkv --sout '#duplicate{dst=rtp{dst=ff02::1%eth1,mux=ts,port=5678,sap,group="Testgroup",name="TeststreamHD"}}' --loop Which is all one long line.
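
    At roughly 15 Mbit/s the bottleneck is rarely raw bandwidth; multicast artifacts that clear on keyframes usually point to dropped UDP packets, often because the receiver's kernel socket buffers are too small for the bursty HD mux. A hedged sketch of things to check on the receiving machine (buffer sizes are illustrative):

        netstat -su | grep -i error               # rising "packet receive errors" = drops
        sysctl -w net.core.rmem_max=8388608       # allow larger UDP receive buffers
        sysctl -w net.core.rmem_default=1048576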

    Read the article

  • Using Windows Azure storage for backup

    - by Bruno
    I am currently looking at Windows Azure blobs as an option for backing up archive data. I want to be able to upload files from an external Windows machine via the internet, but I don't know enough about Windows Azure storage to make a decision. Some of the questions I have are: How do I upload the files? Is there a client application, or can I use robocopy? Would it be fast enough, i.e. could I download or upload 1TB of data in a week? Is it secure? Hopefully someone smarter than me can help me :-)
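
    On the "fast enough" question, the arithmetic is independent of Azure: moving 1 TB in a week needs a sustained transfer rate of roughly 13-14 Mbit/s. A quick sketch of the calculation:

        tb = 10**12                      # 1 TB in bytes (decimal)
        week = 7 * 24 * 3600             # seconds in a week
        print(tb / week / 1e6)           # ~1.65 MB/s sustained
        print(tb / week * 8 / 1e6)       # ~13.2 Mbit/s sustained throughput needed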

    Read the article

  • Recover Bios HP DV9700T windows 7 64

    - by petebob796
    I downloaded the latest DV9700t BIOS (.59) and ran the update tool in Windows 7 64-bit. This seemed to go OK and showed the usual "system will shut down in 10 seconds" dialog. Upon reboot nothing happens; I get no BIOS screen or anything, so I think the BIOS update went wrong, which I am annoyed about - I have never known a BIOS update to fail before, and lots of people on the internet seem to have had the same problem with the same model. There is apparently a way to recover using the crisdisk utility: create a bootable USB key, then hold Win+B at power-on. Unfortunately, the utility to make the disk doesn't seem to work on 64-bit versions of Windows, the only kind I have easy access to. Does anyone know a way to create the boot disk on 64-bit Windows?

    Read the article

  • C++ templated factory constructor/de-serialization

    - by KRao
    Hi, I was looking at the boost serialization library, and the intrusive way to provide support for serialization is to define a member function with this signature (simplifying):

        class ToBeSerialized {
        public:
            //Define this to support serialization
            //Notice not a virtual function!
            template<class Archive>
            void serialize(Archive & ar) {.....}
        };

    Moreover, one way to support serialization of derived classes through base pointers is to use a macro of the form:

        //No mention of the base class(es) from which Derived_class inherits
        BOOST_CLASS_EXPORT_GUID(Derived_class, "derived_class")

    where Derived_class is some class inheriting from a base class, say Base_class. Thanks to this macro, it is possible to serialize classes of type Derived_class through pointers to Base_class correctly. The question is: in C++ I am used to writing abstract factories implemented through a map from std::string to (pointers to) functions which return objects of the desired type (and everything is fine thanks to covariant types). However, I fail to see how I could use the above non-virtual serialize template member function to properly de-serialize (i.e. construct) an object without knowing its type (but assuming that the type information has been stored by the serializer, say in a string). What I would like to do (keeping the same nomenclature as above) is something like the following:

        XmlArchive xmlArchive; //A type of archive
        xmlArchive.open("C:/ser.txt"); //Contains type information for the serialized class
        Base_class* basePtr = Factory<Base_class>::create("derived_class",xmlArchive);

    with the function on the right-hand side creating an object on the heap of type Derived_class (via the default constructor - this is the part I know how to solve) and calling the serialize function of xmlArchive (here I am stuck!), i.e. doing something like:

        Base_class* Factory<Base_class>::create("derived_class",xmlArchive)
        {
            Base_class* basePtr = new Base_class; //OK, doable, usual map from string to pointer to function
            static_cast<Derived_class*>( basePtr )->serialize( xmlArchive ); //De-serialization, how?????
            return basePtr;
        }

    I am sure this can be done (boost serialize does it, but its code is impenetrable! :P), but I fail to figure out how. The key problem is that the serialize function is a template function, so I cannot have a pointer to a generic templated function. As the point of writing the templated serialize function is to make the code generic (i.e. not having to re-write the serialize function for different Archivers), it does not make sense to have to register all the derived classes for all possible archive types, like:

        MY_CLASS_REGISTER(Derived_class, XmlArchive);
        MY_CLASS_REGISTER(Derived_class, TxtArchive);
        ...

    In fact, in my code I rely on overloading to get the correct behaviour:

        void serialize( XmlArchive& archive, Derived_class& derived );
        void serialize( TxtArchive& archive, Derived_class& derived );
        ...

    The key point to keep in mind is that the archive type is always known, i.e. I am never using runtime polymorphism for the archive class... (again, I am using overloading on the archive type). Any suggestion to help me out? Thank you very much in advance! Cheers
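
    Since the archive type is known statically wherever create() is called, one way out is to template the factory (or at least its creator map) on the archive type, so each registered creator instantiates the serialize<Archive> member for its concrete class. A minimal sketch, with names invented here rather than taken from boost.serialization:

        #include <map>
        #include <string>

        template <class Base, class Archive>
        class Factory {
        public:
            typedef Base* (*Creator)(Archive&);

            static void register_type(const std::string& tag, Creator c) {
                creators()[tag] = c;
            }

            static Base* create(const std::string& tag, Archive& ar) {
                return creators()[tag](ar);      // real code should handle unknown tags
            }

        private:
            static std::map<std::string, Creator>& creators() {
                static std::map<std::string, Creator> m;
                return m;
            }
        };

        // one creator per (derived type, archive type) pair; this is the cost of
        // keeping serialize() a non-virtual member template
        template <class Derived, class Base, class Archive>
        Base* create_and_load(Archive& ar) {
            Derived* d = new Derived;            // default-construct
            d->serialize(ar);                    // resolved at compile time, no virtual call
            return d;
        }

    Registration would then look something like Factory<Base_class, XmlArchive>::register_type("derived_class", &create_and_load<Derived_class, Base_class, XmlArchive>); overload resolution on the archive type still works as usual inside each serialize.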

    Read the article

  • Windows Task Scheduler

    - by Zulakis
    I am trying to deploy an auto-starting program with Administrator privileges on our XP SP1 machines. For this I am using the Windows Task Scheduler. Since most of our machines are deployed using a PXE imaging system, the task fails because the Administrator user entered is, for example, r126/Administrator. If I only enter Administrator, it automatically changes to machinename/Administrator. Since the machine names are automatically changed by the imaging system, the tasks fail to run. Any ideas on how to fix that?
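
    One way around the baked-in machine name, sketched here on the assumption that the program does not need an interactive desktop, is to run the task under the built-in SYSTEM account, which is identical on every imaged machine (task name and path are placeholders):

        schtasks /create /tn "AutoStartApp" /tr "C:\Deploy\app.exe" /sc onstart /ru SYSTEM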

    Read the article

  • Failing RAM, or something else? [closed]

    - by Thanatos
    I have an IBM Thinkpad T43, currently running Windows XP. Programs were crashing and XP was blue-screening (more than usual) - it was basically unusable, but I couldn't get any informative error out of XP. I booted Ubuntu off a thumbdrive, which made it to the desktop, but as soon as I started to try to do anything, X segfaulted, along with several other services, followed quickly by kernel warnings and a kernel panic. I'm currently running Memtest86+ on this machine, which is spitting out numerous errors (16k over 3 passes, and counting). The failing areas are numerous and look something like this: 0001055da4 - XX.X MB, etc. The addresses that fail seem to cluster around 0-20 MB, 250 MB, and, more rarely, 750 MB, 1000 MB, and 1200 MB. However, a lot (but not all) of the failing addresses that I've seen end in XXXXXXX?da4, where the ? is a 1 or a 5. The machine has two sticks of RAM, one 512 MB and one 1024 MB, with the 512 MB stick mapped to the lower addresses and the 1024 MB stick following. Is this indeed RAM failure, or should I consider other things before purchasing more RAM?

    Read the article

  • App to watch installer and roll back host later?

    - by OverTheRainbow
    I'm looking for a Windows application that can watch everything an installer does to Windows and can roll the system back to the state it was in before the application was installed. InCtrl5 is useful for knowing what an installer did, but doesn't provide a way to return the host to its previous state. I'd like to avoid having to restore a host using e.g. CloneZilla just for a small application. The goal is to make it fast to test an application in a test lab. Does someone know of an application that can do this? Edit: I wasn't specific enough: I need a way to totally remove an application but keep all other changes I made after installing the application. Edit: After checking a few of them, I settled on Cleanse Uninstaller, which was capable of removing the whole of an application, although it doesn't watch the installation as it happens.

    Read the article

  • Windows and domain suffix addition

    - by grawity
    I have a DNS domain and host it on my own server. My desktop PC (Windows XP) is configured with mydomain.tld as its primary DNS suffix. Now, when the system tries to resolve any domain - stackoverflow.com, for example - it tries with the suffix appended first, even if the name has periods in it. In other words, it tries stackoverflow.com.mydomain.tld. before stackoverflow.com.. Is this valid according to DNS standards and common sense? Is there anything I can do to prevent it, other than removing the suffix completely? (I still want it to be appended to single-component hostnames. Currently I have the two suffixes . and mydomain.tld. configured, but that isn't very fast when resolving foohost.)
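
    For testing which form is actually sent, a trailing dot marks a name as fully qualified, and no suffix is ever appended to it; whether suffixes get appended to multi-label names at all is controlled by the Windows DNS client settings (reportedly the AppendToMultiLabelName value under the Dnscache parameters key, though that is worth verifying for XP). A quick sketch:

        rem trailing dot = fully qualified, no suffix is appended
        nslookup stackoverflow.com.
        rem no trailing dot = subject to the suffix search list
        nslookup stackoverflow.com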

    Read the article

  • XDMCP is slow, any ideas? (looking for alternative remote desktops)

    - by peteri
    I'm used to using RDP on Windows to remote into machines, and I've got an Asus Eee 701 which I want to use to do some *nix programming on. While the Eee is a lovely little machine, the screen and keyboard are a little small to use for lots of programming. I've tried using Xming (the free version) to log in remotely to the Eee from my desktop using XDMCP (or even using an SSH session as a straight X11 server, with no desktop on the Eee), but the whole thing seems seriously slow over wifi: the initial desktop takes at least 5 seconds to paint (it might even be 10 seconds; I haven't actually timed it). So my real question is: what do other folks use for remote control with a GUI for their *nix boxes? I am finding it hard to believe the performance is so bad over a wifi network (it makes the Mac IIs I used at college in 1988 seem fast) - or is this just a problem with Xming, and would using, say, the Cygwin X11 server be better?
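
    X11 is a chatty, round-trip-heavy protocol, so wifi latency hurts it far more than bandwidth does. Two common workarounds, sketched under the assumption that an SSH server runs on the Eee (hostnames and geometry are placeholders), are compressed X forwarding and VNC, which ships screen updates instead of drawing commands:

        # compressed X forwarding for a single application
        ssh -X -C user@eee-pc xterm
        # or run a VNC server on the Eee and view it from the desktop
        tightvncserver :1 -geometry 800x480 -depth 16
        vncviewer eee-pc:1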

    Read the article

  • Why are my DNS Lookups so long (300+ms) when accessing my web site?

    - by Travis
    I'm running a Fedora 11 server with Apache 2. I'm trying to optimize so things are as fast as possible from the server side, and I'm noticing (via Firebug for Firefox) that upon loading the homepage of one of the sites on the web server, for every file it loads (HTML, CSS, JavaScript, GIF, PNG, JPG, etc.) it does a DNS lookup. All of the files it is looking up are local to the server, so I'm surprised to see it do a DNS lookup at all. Also, each of these lookups is in the 150-450 ms range, which is way too high for my liking. I've tried adjusting /etc/resolv.conf to use Google's Public DNS servers. I restarted the network service and hit the page again, but the numbers didn't go down. I've reverted back to the default DNS servers since I didn't see any gain. Any ideas on what is causing it to: a) do the DNS lookup in the first place, and b) take so long when doing the actual lookup? Thanks in advance.
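
    Worth noting that Firebug measures the browser's own lookups, so the client's resolver path matters as much as the server's; still, a local caching layer usually removes the repeated cost. A hedged sketch (the hostname is a placeholder):

        time getent hosts static.example.com     # run twice; a warm cache should answer almost instantly
        dig static.example.com                   # shows which server answered and the query time
        yum install nscd && service nscd start   # simple local caching daemon on Fedora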

    Read the article

  • many-to-many-to-many, incl. alignment of data from different sources

    - by JefeCoon
    Re-factoring the database to support many:many:many. At the second and third levels we need to preserve end-user 'mapping' or aligning of data from different sources, e.g.

        Order 17
          FirstpartyOrderID => aha
            LineItem_for_BigShinyThingy => AA-1   # maps to 77-a
            LineItem_for_BigShinyThingy => AA-2   # maps to 77-b, 77-c
            LineItem_for_LittleWidget   => AA-x   # maps to 77-zulu, 77-alpha, 99-foxtrot
            LineItem_for_LittleWidget   => AA-y   # maps to 77-zulu, 99-foxtrot
            LineItem_for_LittleWidget   => AA-z   # maps to 77-alpha
          ThirdpartyOrderID => foo
            LineItem_for_BigShinyThingy => 77-a
            LineItem_for_BigShinyThingy => 77-b
            LineItem_for_BigShinyThingy => 77-c
            LineItem_for_LittleWidget   => 77-zulu
            LineItem_for_LittleWidget   => 77-alpha
          ThirdpartyOrderID => bar
            LineItem_for_LittleWidget   => 99-foxtrot

    Each LineItem has daily datapoints reported from its own source (Firstparty|Thirdparty). In our UI & app we provide tools to align these, then we'd like to save them into the cleanest possible schema for querying, enabling us to diff the reported daily datapoints and perform other daily calculations (which we'll store in the database also; fortunately that should be cake once we've nailed this). We need to map related [firstparty|thirdparty] line_items which have their own respective datapoints. We'll be using the association to pull each line_item's collection of datapoints for summary and discrepancy calculations. I'm considering two options: standard has_many :through x2, or possibly (scary) an ubermasterjoin table.

    OptionA:

        order <<-->> order_join_table[id,order_id,firstparty_order_id,thirdparty_order_id] <<-->> line_item
        order_join_table[firstparty_order_id] --> raw_order[id]
        order_join_table[thirdparty_order_id] --> raw_order[id]
        raw_order --> raw_line_items[raw_order_id]
        line_item <<-->> line_item_join[id,LI_join_id,firstparty_LI,thirdparty_LI] <<-->> raw_line_items
        line_item_join[firstparty_LI] --> raw_line_item[id]
        line_item_join[thirdparty_LI] --> raw_line_item[id]
        raw_line_item <<-->> datapoints

        = we rely upon the join to store all mappings of first|third orders & line_items
        = keys to raw_* enable lookup of these order & line_item details
        = concerns about circular references and/or lack of correct mapping logic, e.g. order--line_item--raw_line_items vs. order--raw_order--raw_line_items

    OptionB:

        order <<-->> join_master[id,order_id,FP_order_id,TP_order_id,FP_line_item_id,TP_line_item_id]
        join_master[FP_order_id & TP_order_id] --> raw_order[id]
        join_master[FP_line_item_id & TP_line_item_id] --> raw_line_item[id]

        = every combo of FP_line_item + TP_line_item writes a record into the join_master table
        = "theoretically" queries are easy/fast/flexible/sexy

    At long last, my questions: a) any learnings from painful firsthand experience about how best to implement/tune/optimize many-to-many-to-many relationships b) in Rails? c) any painful gotchas (circular references, slow queries, spaghetti-monsters) to watch out for? d) any joy & goodness in Rails 3 that makes this magically easy & joyful? e) anyone written the "how to do many-to-many-to-many schema in Rails and make it fast & sexy?" tutorial that I somehow haven't found? If not, I'll follow up with our learnings in the hope it's helpful. Thanks in advance - --Jeff
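
    For Option A, a minimal Rails sketch (model and column names follow the question and are illustrative only; the :source option is what lets one join table hold two foreign keys into the same raw table):

        class Order < ActiveRecord::Base
          has_many :order_joins
          has_many :firstparty_orders, :through => :order_joins, :source => :firstparty_order
          has_many :thirdparty_orders, :through => :order_joins, :source => :thirdparty_order
        end

        class OrderJoin < ActiveRecord::Base        # id, order_id, firstparty_order_id, thirdparty_order_id
          belongs_to :order
          belongs_to :firstparty_order, :class_name => 'RawOrder'
          belongs_to :thirdparty_order, :class_name => 'RawOrder'
        end

        class LineItemJoin < ActiveRecord::Base     # id, line_item_id, firstparty_li_id, thirdparty_li_id
          belongs_to :line_item
          belongs_to :firstparty_line_item, :class_name => 'RawLineItem', :foreign_key => 'firstparty_li_id'
          belongs_to :thirdparty_line_item, :class_name => 'RawLineItem', :foreign_key => 'thirdparty_li_id'
        end

        class RawLineItem < ActiveRecord::Base
          belongs_to :raw_order
          has_many :datapoints
        end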

    Read the article

  • Drupal 7 on Windows - File Module Problems

    - by TimothyP
    Installed Drupal 7 using the Web Platform Installer on Windows 2008. For some reason the file module, when you upload a file, uses the first few letters of the filename as the unique key to store in the database, which of course causes problems very fast. I'm wondering, does anybody have a workaround for this?

        An AJAX HTTP request terminated abnormally. Debugging information follows.
        Path: /file/ajax/field_file/und/0/form-EBMatHzV5cZXcWvXJtdADSdyw7Id9-GIpFM_NCJg_a4
        StatusText: n/a
        ResponseText: Error message
        PDOException: SQLSTATE[23000]: [Microsoft][SQL Server Native Client 10.0][SQL Server]Cannot insert duplicate key row in object 'dbo.file_managed' with unique index 'uri_unique'. in drupal_write_record() (line 6776 of ..........\includes\common.inc).
        Error
        The website encountered an unexpected error. Please try again later.
        ReadyState: undefined

    (PS: I hope superuser is the right place to ask)

    Read the article

  • Hosts file in Apache keeps changing for OS Linux Redhat [on hold]

    - by jack f
    I have installed an Apache server with two clients, e.g. client_1 and client_2. Operations that we perform on client_1 are reflected on client_2. We have an etc/hosts file in our software install location which keeps changing for client_2 to client_1's IP address. If I correct the entries in the hosts file for client_2, within the next few minutes it changes automatically back to client_1 (if we start the client_1 service). Please explain the use of the hosts file, and where and when it is changed by the Apache service. The /etc/hosts file on both clients is the same:

        # Do not remove the following line, or various programs
        # that require network functionality will fail.
        127.0.0.1    localhost.localdomain localhost
        # Local LAN
        190.0.0.1    client_1.Example.com client_1
        190.0.0.2    client_2.Example.com client_2
        # HR LAN
        10.1.74.2    client_1hr peer
        10.1.74.3    client_2hr
        # ESP LAN
        10.69.69.1   client_1esp
        10.69.69.2   client_2esp

    Any help will be appreciated. Thanks in advance, Jack F
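
    Apache itself never rewrites /etc/hosts; something else on the box (a cluster/HA agent, a vendor sync script, or the client_1 service mentioned above) is doing it. On Red Hat the audit subsystem can identify the responsible process; a short sketch:

        auditctl -w /etc/hosts -p wa -k hosts_change   # watch for writes and attribute changes
        # ...wait until the file flips again, then:
        ausearch -k hosts_change                       # shows the PID and executable that modified it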

    Read the article

  • Which is the best internet security + Antivirus solution for Windows?

    - by metal gear solid
    Which is the best internet security + antivirus solution for Windows? Free/open-source or commercial, it doesn't matter; I need the best solution. Is Kaspersky the best, or is there another? http://www.kaspersky.com/kaspersky_internet_security Award-winning technologies in Kaspersky Internet Security 2010 protect you from cybercrime and a wide range of IT threats:
    * Viruses, Trojans, worms and other malware, spyware and adware
    * Rootkits, bootkits and other complex threats
    * Identity theft by keyloggers, screen capture malware or phishing scams
    * Botnets and various illegal methods of taking control of your PC or Netbook
    * Zero-day attacks, new fast emerging and unknown threats
    * Drive-by download infections, network attacks and intrusions
    * Unwanted, offensive web content and spam

    Read the article

  • Having trouble storing a CRTP based class in a vector

    - by user366834
    Hi, I'm not sure if this can be done; I'm just delving into templates, so perhaps my understanding is a bit wrong. I have a Platoon of soldiers. The platoon inherits from a formation to pick up the formation's properties, but because I could have as many formations as I can think of, I chose to use the CRTP to create the formations, hoping that I could make a vector or array of Platoon to store the platoons in. But, of course, when I make a Platoon it won't store in the vector: "types are unrelated". Is there any way around this? I read about "veneers", which are similar, and that they work with arrays, but I can't get it to work; perhaps I'm missing something. Here's some code (sorry about the formatting, the code is here in my post but it's not showing up for some reason):

        template < class TBase >
        class IFormation
        {
        public:
            ~IFormation(){}
            bool IsFull() { return m_uiMaxMembers == m_uiCurrentMemCount; }
        protected:
            unsigned int m_uiCurrentMemCount;
            unsigned int m_uiMaxMembers;
            IFormation( unsigned int _uiMaxMembers ): m_uiMaxMembers( _uiMaxMembers ), m_uiCurrentMemCount( 0 ){} // only allow use as a base class.
            void SetupFormation( std::vector<MySoldier*>& _soldierList ){}; // must be implemented in derived class
        };

        /////////////////////////////////////////////////////////////////////////////////
        // PHALANX FORMATION
        class Phalanx : public IFormation<Phalanx>
        {
        public:
            Phalanx( ): IFormation( 12 ), m_fDistance( 4.0f ) {}
            ~Phalanx(){}
        protected:
            float m_fDistance; // the distance between soldiers
            void SetupFormation( std::vector<MySoldier*>& _soldierList );
        };

        ///////////////////////////////////////////////////////////////////////////////////
        // COLUMN FORMATION
        class Column : public IFormation< Column >
        {
        public:
            Column( int _numOfMembers ): IFormation( _numOfMembers ) {}
            ~Column();
        protected:
            void SetupFormation( std::vector<MySoldier*>& _soldierList );
        };

    I then use these formations in the platoon class to derive, so that the platoon gets the relevant SetupFormation() function:

        template < class Formation >
        class Platoon : public Formation
        {
        public:
            **** platoon code here
        };

    Everything works great and as expected up until this point. Now, as my general can have multiple platoons, I need to store the platoons:

        typedef Platoon< IFormation<> > TPlatoon; // FAIL
        typedef std::vector<TPlatoon*> TPlatoons;
        TPlatoon m_pPlatoons
        m_pPlatoons.push_back( new Platoon<Phalanx> ); // FAIL, types unrelated.

    The first typedef fails because I need to specify a template parameter, yet specifying one will only allow me to store platoons created with that same template parameter. So I then created FormationBase:

        class FormationBase
        {
        public:
            virtual bool IsFull() = 0;
            virtual void SetupFormation( std::vector<MySoldier*>& _soldierList ) = 0;
        };

    and made IFormation publicly inherit from that, and then changed the typedef to:

        typedef Platoon< IFormation< FormationBase > > TPlatoon;

    but still no love. In my searches so far I have not found info that says this is possible - or not possible.
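
    One way to get heterogeneous storage, sketched below with a simplified signature, is to give the platoons (rather than the formations) a small non-template interface: Platoon<Formation> implements the virtual call by forwarding to the statically known formation code, and the vector stores pointers to that interface:

        #include <vector>

        class PlatoonBase {                       // non-template interface, used only for storage
        public:
            virtual ~PlatoonBase() {}
            virtual void SetupFormation() = 0;    // signature simplified for the sketch
        };

        template <class Formation>
        class Platoon : public PlatoonBase, public Formation {
        public:
            virtual void SetupFormation() {
                Formation::SetupFormation();      // forwards to the CRTP formation code
            }
        };

        // usage (Phalanx as in the question):
        //   std::vector<PlatoonBase*> platoons;
        //   platoons.push_back(new Platoon<Phalanx>);
        //   platoons.front()->SetupFormation();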

    Read the article

  • Why does Mac OS X sometimes complain that a copy failed because a file is in use?

    - by orj
    Recently I've been copying files from DVDs to network storage on my Mac running Leopard 10.5.7. I'm just dragging and dropping in Finder to perform the copy. Occasionally the copy will fail with a dialog complaining that a file is in use. If I repeat the copy, it generally completes successfully. I could understand this being a problem if I were trying to move a file and it was open in another app, but none of these files are open in other apps. I just pop the DVD in, drag and drop the files onto my NAS's network share, and sometimes it fails with the "file in use" error. This is very annoying. Anyone have any ideas?
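
    When it happens again, lsof can show which process still has the file open; on OS X it is often Spotlight's mdworker indexing the mounted DVD or Finder generating previews. A hedged sketch (the volume and file path are placeholders):

        lsof /Volumes/MY_DVD/path/to/file.mov      # lists any process with the file open
        sudo mdutil -i off /Volumes/MY_DVD         # optionally disable Spotlight indexing for that volume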

    Read the article

  • Linux using the link command

    - by Xavier
    Here it goes. I have a folder called /data/backup that does not have much space behind it, but I have been told that if I link that folder (/data/backup) to a much bigger area, /bigdata/backup for example, I will be able to run backups to /data/backup: because it is just a link, the data will be visible in both folders, the actual backup results will end up in /bigdata/backup, and since /bigdata/backup has far more disk space, the backup will no longer fail because of space problems in /data/backup. Is this true? Thanks Xav
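
    Yes - a symbolic link works for this, provided any existing contents are moved first so the link can be created in place of the old directory. A minimal sketch using the paths from the question:

        mv /data/backup /data/backup.old       # keep whatever is already there
        ln -s /bigdata/backup /data/backup     # /data/backup now points at the big filesystem
        df -h /data/backup/                    # reports the free space of /bigdata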

    Read the article

  • Private Git repo using Smart HTTP with LDAP authentification

    - by ALOToverflow
    I've been crawling the interwebz and getting my hands dirty for the last few days, but I can't seem to make it all work together. I managed to get an HTTP repo working on Ubuntu 10.04 over Smart HTTP (pull and push over HTTP) for a single repo. This means that I do the initial setup over SSH to the server (git init --bare), and after that the clients can pull and push to it (git clone http://servername/allgitrepos/repo.git). Unfortunately it's impossible to add a new repo without SSHing to the server and adding it manually; i.e. git push http://servername/allgitrepos/repo2.git (allgitrepos is readable, writable and executable for everyone) fails complaining about git update-server-info (which seems to be a generic error message). So far the repository is anonymous, so I would like to authenticate using LDAP and also use the LDAP credentials for the git commits. So, how can I push new repos to the server, and how can I use the LDAP credentials for the git commits? Thanks
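
    Creating a repository is not something the Smart HTTP backend will do on push, so repo creation stays a server-side step (by hand, a small script, or a manager such as gitolite). A hedged sketch, assuming the repos live under /var/www/allgitrepos on the server:

        # server side: create the new bare repo once
        git init --bare /var/www/allgitrepos/repo2.git

        # if clients fetch over "dumb" HTTP, the info files must be refreshed after each push
        cd /var/www/allgitrepos/repo2.git
        mv hooks/post-update.sample hooks/post-update && chmod +x hooks/post-update

        # for Smart HTTP pushes through git-http-backend, allow receive-pack explicitly
        git config http.receivepack true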

    Read the article
