Search Results

Search found 23949 results on 958 pages for 'test test'.

  • openldap proxied authorization

    - by bemace
    I'm having some trouble doing updates with proxied authorization (searches seem to work fine). I'm using UnboundID's LDAP SDK to connect to OpenLDAP, sending a ProxiedAuthorizationV2RequestControl for dn: uid=me,dc=People,dc=example,dc=com along with the update. I've tested and verified that the target user has permission to perform the operation, but I get "insufficient access rights" when I try to do it via proxy auth. I've configured olcAuthzPolicy=both in cn=config and authzTo={0}ldap:///dc=people,dc=example,dc=com??subordinate?(objectClass=inetOrgPerson) on the original user. The authzTo rule seems to be working: when I change it, I get "not authorized to assume identity" when I try the update (and also for searches). Can anyone suggest what else I should look at, or how I could get more detailed errors from OpenLDAP? Anything else I can test to narrow down the source of the problem?
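
    A minimal sketch of one way to get more detail out of OpenLDAP: raise the server's log level so ACL evaluation is traced. This assumes root access to the cn=config backend over ldapi; "acl" and "stats" are standard slapd log level keywords.

        ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
        dn: cn=config
        changetype: modify
        replace: olcLogLevel
        olcLogLevel: acl stats
        EOF

    With "acl" logging on, syslog shows each access rule slapd evaluates for the proxied identity, which usually pinpoints the ACL that is denying the write.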

  • How to troubleshoot a remote WMI query/access failure?

    - by Roman
    I'm using PowerShell to query a remote computer in a domain for a WMI object, e.g. gwmi -computer test -class win32_bios. I get this error message: "Value does not fall within the expected range". Executing the query locally under the same user works fine. It seems to happen on both Windows 2003 and 2008 systems. The user that runs the shell has admin rights on the local and remote server. I checked the WMI and DCOM permissions as far as I know how to; they seem to be the same on a server where it works and on another where it does not. I don't think it is a network issue: all the ports that are needed are open, and it also happens within the same subnet. When sniffing the traffic we see the following errors:

        RPC: c/o Alter Cont Resp: Call=0x2 Assoc Grp=0x4E4E Xmit=0x16D0 Recv=0x16D0
        Warning: GssAPIMechanism is not found, either caused by not reassembled, conversation off or filtering.

    and an error message from Kerberos:

        Kerberos: KRB_ERROR - KDC_ERR_BADOPTION (13) The option code in the packet is 0x40830000

    Any idea what I should look into?

  • Why, when I lose my internet connection on Kubuntu, am I unable to reconnect?

    - by Jonathan
    Using Kubuntu 11.10 on a Sony Vaio. Network controller: Intel Corporation WiFi Link 5100. If I connect to a wireless network and the signal drops, I am unable to connect to any network without a reboot. I can assure you the issue has nothing to do with the computer going to sleep, as I have experienced the above while using my computer continuously. Here is exactly what happens: I connect to a network (at University, where the connection is not so great); the connection is broken; there are three other networks available, but none of them can be connected to, even after trying off and on, sometimes for hours. I am always able to reestablish a connection after a reboot. I can only think of two explanations. The first is that a temp file is corrupted when the internet connection is abruptly dropped. The second is that my computer actually corrupts something before the loss of connection, which causes both the loss of signal and the inability to reconnect. However, I am not confident that my explanations are complete, nor do I have any idea how to test these things.
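
    One cheap way to test the "corrupted state" theory without a reboot is to restart the pieces of the wireless stack one at a time; whichever restart fixes it points at the layer holding the bad state. A sketch, assuming the Intel 5100 uses the iwlagn module on 11.10's 3.0-series kernel (newer kernels call it iwlwifi):

        # restart just the network management daemon
        sudo service network-manager restart

        # if that doesn't help, reload the wireless driver itself
        sudo modprobe -r iwlagn && sudo modprobe iwlagn

    If the driver reload is what restores connectivity, the problem is in the driver or firmware state rather than in any on-disk temp file.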

  • routing traffic between two network cards through firewall

    - by RubyFreak
    I'm trying to test a network device (a firewall) using a Linux box with two network cards, one interface connected to the WAN zone and the other to the LAN zone. The configuration is similar to this:

        |ETH0| <-> | FW | <-> |ETH1|

    From both interfaces I'm able to ping the respective firewall interface, but I'm not able to run something like ping -I eth0 ip.from.eth1 and get any answer. Is that possible, or do I need the Linux network namespace solution or a user-level TCP stack? (VMs are out of the question.)
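
    The underlying problem is that the kernel sees both addresses as local and answers the ping internally, so the packets never leave the box. A sketch of the namespace approach with recent iproute2, which moves eth1 into its own stack so traffic is forced out through eth0 and across the firewall (all addresses are placeholders):

        # give eth1 its own, isolated network stack
        sudo ip netns add lanside
        sudo ip link set eth1 netns lanside
        sudo ip netns exec lanside ip addr add 192.168.1.2/24 dev eth1
        sudo ip netns exec lanside ip link set eth1 up
        # reply path back through the firewall's LAN-side address
        sudo ip netns exec lanside ip route add default via 192.168.1.1

        # route the LAN subnet via the firewall's WAN-side address
        sudo ip route add 192.168.1.0/24 via 10.0.0.1

        # this ping now leaves via eth0 and must traverse the firewall
        ping 192.168.1.2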

  • Random boot failure. Can someone help?

    - by desgua
    I often get stuck at boot when on battery. I can suspend and hibernate without any trouble (even on battery). If I plug in the power cable, everything goes right most of the time (edited June 13). Adding "acpi=off" doesn't solve the issue, and disabling power save for the LAN in the BIOS doesn't solve it either. A memory test seems to be OK, and I can even compile a kernel without a problem. Does anyone have a tip for this? obs.: this is an upgraded installation from Natty alpha. edited I (May 22): I've installed a brand-new final 11.04 and got the bug again. edited II (June 13): I thought it was solved, but it is not. After spending eight days off, the same problem happened even with the power cable connected. I had to reboot about 6 times until success. edited III (June 14): I can boot (even on battery) if I disconnect the cable and take the battery out for a few seconds. This leads me to conclude that maybe something is kept in the memory of some piece of hardware. Maybe related or not, I also have touchpad issues (jumps) with this machine. edited IV (June 25): I opened gconf-editor, went to apps / gnome-power-manager, and disabled everything possibly related to suspend or hibernate. No help. I also grabbed a Fedora live USB pendrive and got the same problem, so I think it is a hardware-related issue.

  • WordPress blog website: Apache and IIS subdirectory

    - by Philippe
    The issue is this: our company has a website hosted on an IIS server. I have recently been given the task of configuring a WordPress server for an eventual WordPress blog, so that our social media employee could test it and see how it works. This was completed successfully and easily on a new server with a WAMP configuration. The site was published as wordpress.domain.com and works fine. HOWEVER! I have now been asked to ensure that the soon-to-go-online blog is accessible at the address domain.com/blog. Is there a way to modify the original company website to simply redirect /blog to the Apache WordPress site? If not, is there a way to transfer wordpress.domain.com onto the IIS server hosting the main website and keep the configuration? Is there a better solution that I haven't thought about (probably)? If so, what would you all suggest?

  • Can I nohup/screen an already-started process?

    - by ojrac
    I'm doing some test-runs of long-running data migration scripts over SSH. Let's say I start running a script around 4 PM; now 6 PM rolls around, and I'm cursing myself for not doing this all in screen. Is there any way to "retroactively" nohup a process, or do I need to leave my computer online all night? If it's not possible to attach screen to/nohup a process I've already started, then why? Something to do with how parent/child processes interact? (I won't accept a "no" answer that doesn't at least address the question of 'why' -- sorry ;) )
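
    A minimal sketch of the closest built-in answer, assuming a bash shell: you can't retroactively wrap the process in screen, because screen has to be the parent that allocates the child's pseudo-terminal, but you can stop the job from dying with the SSH session. Output keeps going to the old terminal, so anything not already redirected to a file is lost; newer tools like reptyr can grab a running process via ptrace and reattach it to a new terminal, where available.

        # 1. suspend the foreground script:  press Ctrl-Z
        # 2. resume it in the background:
        bg
        # 3. mark job %1 so bash won't send it SIGHUP when the session closes
        #    (plain 'disown' would remove it from the job table entirely):
        disown -h %1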

  • What is the basic loadout for an open source web developer?

    - by DeveloperDon
    Thus far, I have mainly been an embedded developer, but I am interested in having the flexibility to do mobile and web development as well. I think my tools should include the following, but probably a lot more:

    - LAMP stack
    - Java IDEs like Eclipse and IntelliJ
    - JS frameworks like Dojo, Node.js, AngularJS (is it better to mix or commit to one?)
    - Cloud solutions like EC2 and Azure (again, OK to mix, or better to commit to one?)
    - Google APIs
    - Continuous integration server
    - Source control tools, with Git for new work and SVN, CVS, and others for imports
    - FTP server
    - Unit test runners
    - Bug trackers
    - OOAD modeling tools or plug-ins?
    - Graphic design tools?
    - Hosting services
    - XML / JSON / other markup?
    - Content management, SEO?

    I am also interested to know if there are tools where it might be better to mix and match or support everything available (maybe for source control), and others where the full focus should be on one (maybe Java vs. C# or Windows vs. Linux vs. MacOS). Perhaps some of these questions need the context of whether the projects will be greenfield (just pick a favorite) or maintenance (no choice; each project continues its legacy, sometimes with poor tools).

  • Forwarding emails to nonexisting users/aliases to external mail server

    - by Niclas Lindqvist
    I'm in the process of installing a Postfix mail server on a machine currently being used as a web server. As of right now I've got it working, in that I can send and receive emails using telnet through port 25. However, as my customer is concerned about downtime, I'd like to set up the accounts one by one over time, making sure each works, rather than just cutting the cord to the old mail server and creating all the new accounts at once. How can I add the domain customer.com to my mail server and just add something like [email protected] to the users and aliases lists, without the new server trying to handle all mail for the @customer.com domain itself? I'm running Ubuntu, using postfix and postfix-mysql on the new machine; the old mail server is in a hosted environment somewhere else, where I don't have any control.
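
    A minimal sketch of the usual migration trick in Postfix: treat customer.com as local, but hand any recipient that doesn't exist locally to the old server. The old server's hostname here is a placeholder, and emptying local_recipient_maps makes Postfix accept mail for any name at the domain (a known backscatter trade-off during the migration window):

        # accept mail for the domain locally
        sudo postconf -e 'mydestination = customer.com, localhost'
        # don't reject recipients just because they have no local account yet
        sudo postconf -e 'local_recipient_maps ='
        # unknown local users fall through to the old hosted server
        sudo postconf -e 'fallback_transport = smtp:[mail.oldhost.example]'
        sudo postfix reload

    As each mailbox is migrated, creating the local user (or alias) automatically stops that address from being forwarded to the old server.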

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work, I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic, database interactions, and Model classes. In the test application I am creating (which is ASP.NET MVC4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still at a proof-of-concept stage, so I have not done any performance testing on it, but I am curious whether what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code on the client controller:

        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() // This view is strongly typed against User
            {
                // testing against Joe Bob
                string adSAMName = "jBob";
                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;
                // 'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
                return View(result);
            }
            . . .
        }

    If I choose to go this route, another thing to note is that I am loading several partial views on this page (as I will also do on subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above: instantiate a new WebClient, define the URL to hit, deserialize the result, and cast it to a Model class. So it is possible (and likely) I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will (1) let me keep strongly typed views, and (2) do my work on the server rather than on the client (just a preference, since I can write C# faster than I can write JavaScript)?

  • SSH Tunnel for Remote Desktop via Intermediary Server

    - by Mihai Todor
    I've seen many examples of SSH tunnels on the net, but I'm still having no luck with this. Here's the setup:

    - A Windows 7 PC in a private network, sitting behind a firewall, with the PowerShellInsider SSH server set up and working fine.
    - A public-access Linux server, which has access to the PC.
    - A Windows 7 laptop, at home, from which I wish to run Remote Desktop on the PC.

    Now, here's what I've tried so far. An SSH tunnel from my laptop to the Linux server:

        ssh -f my_user@LINUX_SERVER -L 6666:LINUX_SERVER_IP:6666 -N

    and an SSH session on the Linux server where I've set up a tunnel to the PC:

        ssh -f 'PRIVATE_DOMAIN\my_user'@PC_NAME -L 6666:PC_IP:3389 -N

    Unfortunately, I must be doing something wrong, because it doesn't seem to work. Any ideas why, or at least any suggestions on how I can debug this setup? At the moment I have access to all 3 machines (non-root on Linux), so I can test whatever I want...
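
    One thing worth checking: by default, -L binds its listener to loopback only, so the first tunnel's target of LINUX_SERVER_IP:6666 never reaches a listener that is bound to 127.0.0.1 on the Linux box. A sketch of a chain that stays on loopback the whole way (the port numbers are arbitrary placeholders):

        # on the Linux server: listen on loopback 6666, forward through the
        # PC's SSH server to the PC's own RDP port
        ssh -f -N -L 6666:localhost:3389 'PRIVATE_DOMAIN\my_user'@PC_NAME

        # on the laptop: listen on 3390, forward (through the Linux box)
        # to the Linux box's loopback 6666
        ssh -f -N -L 3390:localhost:6666 my_user@LINUX_SERVER

        # then point the laptop's RDP client at localhost:3390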

  • Where is the PHP executable on Ubuntu?

    - by user601L
    I have installed Apache and PHP. I know PHP works, as I have tested a simple PHP file on the Apache server. I'm writing a simple web server which should be able to process PHP files. So what I want to do, once I get a request for a PHP file, is something like exec php test.php, then take the output and pass it to the client. As I'm not much into Ubuntu, I don't know where the PHP executable is (it should be in /bin, right?). But there is no php file inside /bin or /usr/bin, and when I run which php it shows nothing. How do I do this?
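
    A likely explanation, as a sketch: on Ubuntu the Apache module and the command-line binary are separate packages, so a box that serves PHP pages fine can still lack /usr/bin/php until the CLI package is installed (the package name below is from the PHP 5 era; on current Ubuntu it is php-cli):

        sudo apt-get install php5-cli
        which php          # should now print /usr/bin/php
        php -f test.php    # runs the script and prints its output to stdout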

  • Hybrid Graphics on Ubuntu 12.04 switching to discrete

    - by cfstras
    I have a Sony Vaio VPCCB-27FX with hybrid graphics. Using vgaswitcheroo enables me to switch my discrete card off to save power. Now when I want to switch to the discrete card for performance, my system freezes. I already tried logging out and killing X with service lightdm stop, but still, it freezes as soon as I echo DIS > switch. Typing blindly, echo IGD > switch returns me to my console, where it reads:

        [  179.555171] i915: switched off

    but it seems the discrete card never gets switched on... Running echo DDIS > switch gives me the following:

        [540....] [drm:atop_op_jump] *ERROR* atombios stuck in loop for more than 5secs aborting
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing CEE2 (len 62, WS 0, PS 0) @ 0xCEFE
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BBF6 (len 1036, WS 4, PS 0) @ 0xBCF3
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BB8C (len 76, WS 0, PS 0) @ 0xBB94
        [541....] [drm:r600_RING_TEST] *ERROR* radeon: ring test failed (scratch(0x8504)=0xFFFFFFFF)
        [541....] [drm:evergreen_resume] *ERROR* evergreen startup failed on resume

    After that, the atombios part repeats a few times. Also, the terminal locks up again and SysRq+REISUB is my only rescue. Has anybody an idea how I can switch to my discrete card without the system locking up?

        # uname -srvmpio
        Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        # lsb_release -r
        Description: Ubuntu 12.04 LTS

  • Is there a way to create a consistent snapshot/SnapMirror across multiple volumes?

    - by Tomer Gabel
    We use a NetApp FAS 6-series filer with an application that spans multiple volumes. For backup purposes I would like to create a consistent snapshot that spans these volumes at the same point in time (or at least with an extremely low delta); additionally, we'd like to use SnapMirror to replicate the production environment to test volumes. The problem is in creating a consistent snapshot/SnapMirror, since these commands are not transactional and do not take multiple parameters. I tried scripting consecutive snap create or snapmirror resync commands via SSH, but there's always a 0.5-2 second difference between the snapshots. It's currently "good enough", but I'm seriously concerned about the consistency impact under increased load (we're currently in pre-production). Has anyone managed to create a consistent snapshot that spans several volumes? How did you pull it off?

  • Why does my new GTX 660m's clock drop drastically after running for a few seconds?

    - by trVoldemort
    I bought a Lenovo Y580 laptop a few days ago; this model is equipped with a GTX 660m graphics card. However, game performance has been unbelievably poor since it came out of the box, so I realized there was something wrong with the graphics card. I downloaded GPU-Z and did a simple test, and I was shocked to find that my GTX 660m is running at a 135.0 MHz core clock. (It should be 835 MHz at least!) Even the integrated "Intel HD Graphics 4000" can run at 650 MHz. Further examination showed that for the first few seconds the GTX 660m was actually running at 835 MHz; however, the core temperature quickly reached 90+°C and the clock (maybe) automatically dropped to 135.0 MHz. This is very strange. Does anyone have any idea what's going on here?

  • You Can't Upload An Empty File To SharePoint 2007 Or SharePoint 2010

    - by Brian Jackett
    The title of this post is pretty self-explanatory, but I thought it worth mentioning, since I had never run across this rule until just recently. A few weeks ago I was testing out a new workflow attached to a SharePoint 2007 document library. I uploaded various file types to ensure all were handled properly. One of the files I happened to test with was an empty .txt file, which produced an error stating that you aren't allowed to upload a file that is empty. Fast forward to this week, when I was doing some research for my upcoming SharePoint 2010 beta exams. I remembered the error I got a few weeks ago and decided to try it out with SharePoint 2010 as well. No surprise, I got a similar error.

    Conclusion

    Next time you are uploading files to a SharePoint 2007 or 2010 document library, make sure the file is not empty. Coincidentally, when I tweeted about this issue a few friends replied that they had also found this error recently. I don't know the internal reasoning why this is prevented, but I assume it has something to do with how the blob for the file is stored in the database. I assume that this would still be the case even if you had Remote Blob Storage (RBS) configured for your farm, but I don't have access to such a farm to confirm. If anyone reading this does have access and wants to confirm, that would be appreciated; just leave a comment.

    -Frog Out

  • Browser privacy improvement implications for websites

    - by phq
    On https://panopticlick.eff.org/ the EFF lets you test the number of uniquely identifying bits that the browser gives a website. Among these are HTTP header fields such as User-Agent, Accept, Accept-Language, and later perhaps ETag and If-Modified-Since. There is also a lot of information that JavaScript can get from the browser, such as time zone, screen resolution, and the complete list of fonts and plugins available. My first impression is: is all this information really usable/used on a majority of all websites? For example, how many sites really send different content types depending on the HTTP Accept header, or care what fonts are available (I thought CSS had taken care of this)? Let's say these headers and JS capabilities were gone one day. Which ones would:

    - never be noticed they were gone?
    - impact user experience?
    - impact server performance?
    - be immediately reimplemented because the Internet cannot work without them?

    Extra credit for differentiating between what can be done, what should be done, and what is done in most situations.
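
    For any single site, the first question is easy to spot-check from a shell; a sketch, where example.com stands in for the site under test (dynamic pages can of course differ between fetches for unrelated reasons):

        # does the response body change with the language header?
        curl -s -H 'Accept-Language: en' https://www.example.com/ | md5sum
        curl -s -H 'Accept-Language: de' https://www.example.com/ | md5sum

    Different checksums suggest the site actually varies on the header; identical ones mean the header is likely ignored for that page.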

  • Should I Use PHP as FastCGI?

    - by Synetech inc.
    Hi, I am running an Apache web server on my Windows machine. It is not generally a public server (most of the little bit of traffic comes from the machine itself, and most of the public traffic comes from crawlers). Basically, it is mostly just for use as a test-bed/development system. I have read about how running PHP as FastCGI is better (i.e. faster and more stable) than as an Apache module. However, I really don't like the idea of multiple php.exe processes (I don't like that Apache has two processes, and I'm not even too thrilled with Chromium's multi-process model). So I'm wondering if it would be worthwhile to change PHP to FastCGI for this scenario, and if it is, how I would configure it. Pretty much all of the information I have seen has been either for non-Windows or for IIS. As I said, I'm running Windows+Apache. Thanks a lot.
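
    A sketch of a mod_fcgid setup for Apache on Windows, in case it helps. The paths are assumptions for a typical C:/php install, the directive names assume mod_fcgid 2.3-style spelling, and FcgidMaxProcesses caps how many php-cgi.exe workers can pile up, which speaks to the multiple-process concern:

        # httpd.conf excerpt
        LoadModule fcgid_module modules/mod_fcgid.so
        <IfModule fcgid_module>
            AddHandler fcgid-script .php
            FcgidInitialEnv PHPRC "C:/php"
            FcgidWrapper "C:/php/php-cgi.exe" .php
            FcgidMaxProcesses 4
        </IfModule>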

  • Adding HTTPS capability to WAMPSERVER 2

    - by abel
    I have WampServer 2 installed on my WinXP Pro SP3 box (Apache 2.2.11 with the SSL module enabled), which runs the company's intranet website. http://www.akadia.com/services/ssh_test_certificate.html gives some pointers on generating a self-signed certificate, but I encounter an error while running through the example

        openssl genrsa -des3 -out server.key 1024

    where openssl.exe is located under C:\wamp\bin\apache\Apache2.2.11\bin. The error that gets generated is:

        4828:error:02001015:system library:fopen:Is a directory:.\crypto\bio\bss_file.c:126:fopen('d:/test/openssl098kvc6/openssl.cnf','rb')
        4828:error:2006D002:BIO routines:BIO_new_file:system lib:.\crypto\bio\bss_file.c:131:
        4828:error:0E078002:configuration file routines:DEF_LOAD:system lib:.\crypto\conf\conf_def.c:199:

    Where am I going wrong?
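
    The errors say openssl.exe is looking for its configuration file at d:/test/openssl098kvc6/openssl.cnf, a path baked in when the binary was built, which doesn't exist on this machine. A sketch of the usual workaround: point the OPENSSL_CONF environment variable at a real openssl.cnf before running the command (the exact location of the .cnf inside the WAMP tree is an assumption here; any valid openssl.cnf will do):

        rem from the same cmd window used to run openssl.exe
        set OPENSSL_CONF=C:\wamp\bin\apache\Apache2.2.11\conf\openssl.cnf
        openssl genrsa -des3 -out server.key 1024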

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration. Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylönen after the Helsinki University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provide no security and transmit data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks.

    One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:

        scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote

    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.

        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00

    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes in the /etc/ssh/sshd_config file:

        AllowTcpForwarding yes

    Once you have this configured, you can connect to the server and set up a local port to which you can direct traffic that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies a local dynamic application-level port forward and the -N specifies not to execute a remote command.

        ssh -D 8989 [email protected] -N

    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine.

        ssh -L 8080:farwebserver.com:80 [email protected]

    You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.

        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts. To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key-based authentication. The first step in setting up key-based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.

        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+

    Now that you have the key generated on the local system, you should copy it to the target server into a temporary location. The user's home directory is fine for this.

        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub

    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.

        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

    Once the process is complete you are ready to log in. Since you are using key-based authentication, you are not prompted for a password when logging into the system.

        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1

    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.

        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    As a security consideration, it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system. An even easier way to move keys back and forth is to use ssh-copy-id. Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub.

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file.

        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678

    Let's compare the login process between the two. Which would you want to type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372
        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing a Maya animation into Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and returns to a straight position over 24 frames. Following that, I selected everything and baked all bones and the IK (and the animation, by selecting all in the graph editor), and even the cylinder. I saved the scene, then selected all and exported as FBX with animation and bake checked. In Unity, I imported it and was able to see the animation in the preview. When I load the model into the scene and play (after assigning the controller), I am able to see the animation too. But now, when I try to script it and control the animation, nothing happens. Even as a test, I tried the following in the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");

    Neither branch ever logs anything. My animation is called "bend", so just to try, I did the following, and nothing happens:

        animation.Play("bend");

    Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller, or is that an unnecessary step? Did I screw up on the Maya part or the Unity part? Thanks for the help.

  • Why can't I get more speed with iperf on Windows XP?

    - by SledgehammerPL
    I'm testing my bandwidth and throughput using iperf (jperf) on a desktop PC with WinXP. I can't get more than 3 Mbit/s to the outside until I change the TCP window size; about 84 KB is OK. But I can't force XP to use this value by default. I've tried many magic spells in the Registry and used many TCP optimizers, but nothing works. I will accept that everything is OK when I can reboot the PC, run iperf and see 18.1 Mbit/s, like on my Linux box standing right next to the Windows XP box. Is it possible?
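
    A sketch of the registry values that govern XP's default receive window, from the old TCP tuning knowledge base articles: 86016 bytes is the 84 KB that worked in iperf, Tcp1323Opts=1 enables window scaling so values above 64 KB are honored, and a reboot is required for the changes to take effect.

        rem run from an administrator cmd prompt, then reboot
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpWindowSize /t REG_DWORD /d 86016 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v GlobalMaxTcpWindowSize /t REG_DWORD /d 86016 /f
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v Tcp1323Opts /t REG_DWORD /d 1 /f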

  • What tools can be used to monitor a web application? Beyond "doesn't 404"

    - by Freiheit
    I have an internal web application that has recently gone through a major version upgrade. I would like to monitor this application over the weekend and look for 'soft' errors. I will still need to spot-check things by hand, but there are some common failure patterns that I think I can automate. Examples include data with bad formatting, blank rows in tables (indicating missing non-critical data), and patterns in identifiers ("TEST" means one of my devs left a testing feed on). I think there are applications out there that can be scripted to do things like:

    1. log in
    2. go to $URL
    3. select the 3rd link in $LIST or $PATTERN
    4. check the HTML from that link for $PATTERNS
    5. email a report

    Are these goals sane? What applications/tools can help with this?
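
    Even without a dedicated tool, the whole loop fits in a few lines of shell; a minimal sketch, where the URL, form fields, patterns, and mail address are placeholders for the real application, and a local mail command is assumed to be available:

        #!/bin/bash
        # log in with a cookie jar, fetch the page, scan for suspect patterns
        jar=$(mktemp)
        curl -s -c "$jar" -d 'user=monitor&pass=secret' https://app.internal/login > /dev/null
        body=$(curl -s -b "$jar" https://app.internal/reports)
        # flag testing-feed identifiers and blank table cells
        hits=$(echo "$body" | grep -E -c 'TEST|<td>[[:space:]]*</td>')
        if [ "$hits" -gt 0 ]; then
            echo "$hits suspect patterns found" | mail -s "Soft-error report" ops@example.com
        fi
        rm -f "$jar"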

  • Blocking a country (mass IP ranges): best practice for the actual block

    - by kwiksand
    Hi all, this question has obviously been asked many times in many different forms, but I can't find an actual answer to the specific plan I've got. We run a popular European commercial deals site, and are getting a large amount of incoming registrations/traffic from countries which cannot even take part in the deals we offer (many of the retailers aren't even known outside Western Europe). I've identified the problem area and want to block a lot of this traffic, but (as expected) there are thousands of IP ranges involved. Now my question (finally!): on a test server, I created a script to block each range within iptables, but adding the rules took a long time, and iptables was unresponsive afterwards (especially when attempting iptables -L). What is the most efficient way of blocking large numbers of IP ranges?

    - iptables?
    - a plugin where I can preload them efficiently?
    - hosts.deny?
    - .htaccess (nasty, as I'd be running it in Apache on every load-balanced web server)?

    Cheers
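
    A sketch of the usual answer to the thousands-of-ranges problem: keep the ranges in an ipset (a kernel hash), so iptables only needs a single rule that matches the whole set, and lookups stay fast no matter how many ranges are loaded. The file name is a placeholder, and older ipset releases spell the commands ipset -N / ipset -A instead:

        # one-time set creation and bulk load from a file of CIDR ranges
        ipset create geoblock hash:net
        while read cidr; do
            ipset add geoblock "$cidr"
        done < country_ranges.txt

        # a single iptables rule matches the entire set
        iptables -I INPUT -m set --match-set geoblock src -j DROP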

  • Tips on how to spread Object-Oriented practices

    - by Augusto
    I work for a medium-sized company that has around 250 developers. Unfortunately, a lot of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications when in fact the application contains rich logic. They also fail to manage design dependencies, and end up with services which depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have poor architecture and design. Another issue is that there are some developers who are against writing any kind of test. A few things I'm doing to change this (though I'm either failing or the change is too small):

    - Running presentations about design principles (SOLID, clean code, etc.).
    - Workshops about TDD and BDD.
    - Coaching teams (this includes using Sonar, FindBugs, JDepend and other tools).
    - IDE & refactoring talks.

    A few things I'm thinking of doing in the future (though I'm concerned they might not be good):

    - Form a team of OO evangelists, who disseminate an OO way of thinking in different teams (these people would need to change teams every few months).
    - Run design review sessions to criticise the design and suggest improvements (even if the improvements are not made because of time constraints, I think this might be useful).

    Something I've found with the teams I coach is that as soon as I leave them, they revert back to the old practices. I know I don't spend a lot of time with them, usually just one month, so whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing this was to hit my head on the wall until I pass out.
