Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All of this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking?

    I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor tick takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective from which to answer this question. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:

    - I/O
    - Exceptions/errors
    - Interfaces with programs written in other languages
    - Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
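
    For a concrete picture of lock-free aggregation through message passing - a minimal Node.js sketch with arbitrary example data, not anything from the question itself - each worker reduces its own immutable input, and the single-threaded parent event loop combines the replies, so no lock is taken anywhere:

        // lock-free result aggregation: workers receive immutable input and
        // post back fresh values; the parent's single-threaded event loop
        // does the combining, so no lock is needed
        const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

        if (isMainThread) {
          const chunks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]; // arbitrary example data
          let pending = chunks.length;
          let total = 0;
          for (const chunk of chunks) {
            const worker = new Worker(__filename, { workerData: chunk });
            worker.on('message', (sum) => {
              total += sum; // runs on the main thread only, one message at a time
              if (--pending === 0) console.log('total =', total);
            });
          }
        } else {
          // each worker folds its own immutable chunk and sends back a new value
          parentPort.postMessage(workerData.reduce((a, b) => a + b, 0));
        }

    The catch, of course, is that the message channel is doing the synchronization a lock would otherwise do - which is one way of locating the boundary of the question's "safety zone".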

    Read the article

  • The Simplicity of the Oracle Stack

    - by user801960
    For many retailers, technology is something they know they need in order to optimise business operations, but do they really understand it, and how can they select the solutions they need from the many vendors on the market? Retail is a data-heavy industry, with the average retailer managing thousands of SKUs and hundreds of categories through multiple channels. Add to this the exponential growth in data driven by social media and mobile activities, and the process can seem overwhelming. Handling data of this magnitude and analyzing it effectively to gain actionable insight is a huge task, and it needs several IT components to work together harmoniously to make the best use of the data available and make smarter decisions. With this in mind, Oracle has produced a video to make it easier for businesses to understand its global data IT solutions and how they integrate seamlessly with Oracle's other solutions to enable organisations to operate as effectively as possible. The video uses an orchestra as an analogy for IT solutions and clever illustration to demonstrate the value of the Oracle brand. This video can be viewed at http://medianetwork.oracle.com/video/player/1622148401001. To find out more about how Oracle's products and services can help retailers to deliver better results, visit the Oracle Retail website.

    Read the article

  • How to handle editing a large file for a non-technical user

    - by Luke
    I have a client who is given a tab-delimited .txt file containing hundreds of thousands of rows. I have a user story as follows:

    As a user I want to take the text file and add a new value at the end of each line which contains the concatenated value of two of the columns.

    For example, if the file read:

        text_one    text_two

    I need to output the following (preferably to a .txt file):

        text_one    text_two    text_onetext_two

    My first approach was to ask the vendor supplying the file to do the concatenation before providing the file - the easiest way to solve a problem is to eliminate it, right? However, they are very uncooperative and have point-blank refused. I've looked at building a simple JavaScript application that does this client-side, so a non-technical user could select the file using a file selector. This approach has a few problems:

    - The file could be over a GB in size and so can't be loaded straight into memory; I've tried, and the browser crashes.
    - There is no means to write a file in browser JavaScript, so I'd need to output the content to the screen and have the user save it (somehow).

    I was thinking that if I could get around the file-size limitations I could just output the edited content to the page and have the user save the page as a .txt file, but I think there is a better way than using JavaScript that will still accommodate the user's lack of technical know-how. Please consider this question to be stack-agnostic, but bear in mind that a nice little shell script or Python script would be deemed unsuitable for a non-technical user unless there is a way of "packaging" it nicely for a non-technical user.

    Updates: The file is too large to open in Excel. The process needs to be run weekly, but it doesn't require scheduling or automation... (yet)
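
    One way to sidestep the memory problem - a minimal Node.js sketch rather than a definitive answer; the file names are placeholders, and it assumes at least two tab-separated columns per line - is to stream the file line by line so it never has to fit in memory, which also avoids the browser's file-writing limitation:

        // stream a large tab-delimited file line by line; memory use stays
        // constant no matter how big the file is
        const fs = require('fs');
        const readline = require('readline');

        const lines = readline.createInterface({ input: fs.createReadStream('input.txt') });
        const output = fs.createWriteStream('output.txt');

        lines.on('line', (line) => {
          const cols = line.split('\t');
          // append the concatenation of the first two columns to each line
          output.write(line + '\t' + cols[0] + cols[1] + '\n');
        });
        lines.on('close', () => output.end());

    A script like this could then be wrapped up for a non-technical user - a double-clickable shortcut that runs it against a fixed folder, for instance - keeping the weekly routine to "drop the file in, double-click".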

    Read the article

  • Inspiring the method of teaching. Example: C++ :)

    - by Ashwin
    A year ago I graduated with a degree in Computer Science and Engineering. With C++ as my first choice of programming language, I have been learning C++ in many ways. At first - five years back - I had many conceptions, most of which were quite abstract to me. It started when I knew almost everything about structs in C and nothing about classes in C++. I went through a great time experimenting with them all and learning a lot. I had a hard time evaluating procedural programming vs. object-oriented programming. Deciding when to choose procedural or object-oriented programming took a great deal of patience for me. I knew that I could not underestimate either of these programming styles.

    Though procedural programming is often a better choice than simple sequential unstructured programming, when solving problems with procedural programming we usually divide one problem into several ordered steps, regarded as functions, and then call these functions one by one to get the result of the problem. When solving problems with object-oriented principles, we divide one problem into several classes and form the interactions between them. Evaluating these two at the beginning (as a learner) required a lot of inspiration and thought: instruction to think step by step, related concepts to understand deeply, and intense interest in contrasting solutions in both POP and OOP.

    If you were ever a mentor: what ideas/methods would you teach to students that will inspire them to learn a programming language (and computer science in general)?

    Read the article

  • How do I overclock a dual core processor?

    - by Pankaj Bhardwaj
    This is a serious problem. I want to speed up my compilation process. I have never been especially fond of overclocking, but my current compilation time for my project makes me crazy! I found information about which parameters I should change to overclock and which values are safe. However, I have one basic problem: I don't have the ability to change this option in my BIOS. Is it possible that these options have been blocked by the system administrators? Getting a faster processor is not an option. How do I edit the proper values to overclock my CPU? Do I need to flash my BIOS to another version?

    My specs:

    - Pentium Dual-Core CPU E5400 @ 2.7GHz
    - ASRock motherboard
    - 4GB RAM
    - Windows 7 Ultimate 32-bit

    Read the article

  • How to parse JSON data from the web faster [closed]

    - by Kaidul Islam Sazal
    I have a JSON inventory, inventory.json, on the server, like this:

    [ { "body" : "SUV", "color" : { "ext" : "White diamond pearl", "int" : "Taupe" }, "id" : "276181", "make" : "Acura", "miles" : 35949, "model" : "RDX", "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JNBD/001_0292.jpg" } ], "power" : { "drive" : "Front wheel drive", "eng" : "2.3L DOHC PGM-FI 16-VALVE", "trans" : "Automatic" }, "price" : { "net" : 29488 }, "stock" : "6942", "trim" : "AWD 4dr Tech Pkg SUV", "vin" : "5J8TB2H53BA000334", "year" : 2011 },
      { "body" : "Sedan", "color" : { "ext" : "Premium white pearl", "int" : "Taupe" }, "id" : "275622", "make" : "Acura", "miles" : 40923, "model" : "TSX", "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JMC6/001_1765.jpg" } ], "power" : { "drive" : "Front wheel drive", "eng" : "2.4L L4 MPI DOHC 16V", "trans" : "Automatic" }, "price" : { "net" : 22288 }, "stock" : "6945", "trim" : "4dr Sdn I4 Auto Sedan", "vin" : "JH4CU2F66AC011933", "year" : 2010 } ]

    Here are two entries; there are almost 5000 entries like this. I parsed this JSON like this:

        var url = "inventory/inventory.json";
        $.getJSON(url, function(data){
          $.each(data, function(index, item){ // straightforward loop
            if(item.year == 2012) {
              $('#desc').append(item.make + ' ' + item.model + ' ' + '<br/>' + item.price.net + '<br/>' + item.pic[0].full);
            }
          });
        });

    This is working fine, but the problem is that this searching and fetching process is a little bit slow, as there are 5000 entries already and the number is increasing day by day. It seems that this is a straightforward loop over the data, a plain brute-force method. Now I want to know whether there is any more time-efficient way to search. Is there a faster method than a straightforward loop?
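
    One common approach - a hedged sketch, and it only pays off if the page runs several searches against the same download - is to pay the linear scan once by building an index keyed on the field you query, so each later lookup is a hash access instead of a walk over all 5000 entries:

        // build a year → entries index once, right after the JSON arrives
        $.getJSON("inventory/inventory.json", function (data) {
          var byYear = {};
          $.each(data, function (i, item) {
            (byYear[item.year] = byYear[item.year] || []).push(item);
          });

          // every subsequent search is now a direct lookup, not a scan
          $.each(byYear[2012] || [], function (i, item) {
            $('#desc').append(item.make + ' ' + item.model + ' ' + '<br/>' +
                              item.price.net + '<br/>' + item.pic[0].full);
          });
        });

    If only one search is ever run per page load, the index buys nothing; in that case the real win is filtering on the server, so less than the full 5000-entry file crosses the wire.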

    Read the article

  • HP-UX -> Linux incremental remote backup

    - by stack_zen
    Hi. I need to set up a differential backup process from a range of remote HP-UX machines to a central RHEL5 server. I'd happily go with rsync; the problem is that stock HP-UX 11.11 has no built-in rsync, and I don't have permission to install any software on the remote stock HP-UX machines. How should I approach this? HP-UX provides:

    - fbackup (HP-UX exclusive)
    - cpio (available in RHEL5; allows backing up only the files which changed, but always transfers each changed file in its entirety)

        ssh remote_user@remote_host 'find /u01/engine/logs/ -type f -name "*.log" | cpio -o | gzip -' | gunzip - | cpio -idmv

    Those solutions don't really answer my incremental (bandwidth-efficiency) problem, do they?
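
    A hedged sketch of one low-tech incremental scheme using only those stock tools (the marker-file path is a placeholder): keep a timestamp file on each HP-UX host and let find -newer select only files modified since the previous run:

        # create /var/tmp/last_backup once before the first run, then each run
        # ships only files changed since the marker and advances it afterwards
        ssh remote_user@remote_host 'find /u01/engine/logs/ -type f -newer /var/tmp/last_backup | cpio -o | gzip -' \
          | gunzip - | cpio -idmv
        ssh remote_user@remote_host 'touch /var/tmp/last_backup'

    This still transfers each changed file whole - true delta transfer is what rsync adds - but it keeps the traffic down to the files that actually changed.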

    Read the article

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration.

    Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylönen after Helsinki University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provide no security and transmit data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks.

    One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:

        scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote

    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.

        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00

    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes in the /etc/ssh/sshd_config file:

        AllowTcpForwarding yes

    Once you have this configured, you can connect to the server and set up a local port to which you can direct traffic that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies local dynamic application-level port forwarding and the -N specifies not to execute a remote command.

        ssh -D 8989 [email protected] -N

    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine.

        ssh -L 8080:farwebserver.com:80 [email protected]

    You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.

        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts.

    To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key-based authentication. The first step in setting up key-based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.

        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+

    Now that you have the key generated on the local system, you should copy it to the target server into a temporary location. The user's home directory is fine for this.

        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub

    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.

        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

    Once the process is complete you are ready to log in. Since you are using key-based authentication, you are not prompted for a password when logging into the system.

        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1

    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.

        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    As a security consideration, it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system.

    An even easier way to move keys back and forth is to use ssh-copy-id. Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub.

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file.

        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678

    Let's compare the login process between the two. Which would you want to type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372
        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.

    Read the article

  • Remove the windows.old folder after it has already been deleted

    - by muckdog12
    When I upgraded from Windows Vista to Windows 7, I copied all of my files back over from the Windows.old folder. When I was done, I deleted the Windows.old folder with the Delete key and then deleted it from my Recycle Bin. This was about 2 months ago. I recently found out that there is a proper process for removing that folder. What do I do now? I'm sure the files have already been overwritten, so is there anything that I should do?

    Read the article

  • Copying photos from camera stalls - how to track down issue?

    - by Hamish Downer
    When I copy files from my camera (connected via USB) to the SSD in my laptop, a few files get copied and then the copy stalls. I'm not sure why; any ideas or things to investigate are appreciated, as are pointers to relevant bug reports.

    I have read this answer - the camera (Canon 40D, in case that matters) mounts fine using gvfs. I can see the photos in Nautilus or in the terminal (in /run/user/username/gvfs/...) and I can copy a few photos, but not many. Using the terminal or Nautilus, the process hangs until the camera goes to sleep. Digikam fails to copy any at all, as does Rapid Photo Downloader. Shotwell did manage it in the end, but that is very much a workaround for me. I have disabled thumbnail generation by Nautilus.

    Load average stays about 1 while this is happening, while CPU usage is half idle, half wait (and a little user/sys for other programs). None of the programs at the top of the CPU list in top are related to copying photos. There is not much in the logs - from /var/log/syslog:

        Dec 2 16:20:52 mishtop dbus[945]: [system] Activating service name='org.freedesktop.UDisks' (using servicehelper)
        Dec 2 16:20:52 mishtop dbus[945]: [system] Successfully activated service 'org.freedesktop.UDisks'
        Dec 2 16:21:24 mishtop kernel: [ 2297.180130] usb 2-2: new high-speed USB device number 4 using ehci_hcd
        Dec 2 16:21:24 mishtop kernel: [ 2297.314272] usb 2-2: New USB device found, idVendor=04a9, idProduct=3146
        Dec 2 16:21:24 mishtop kernel: [ 2297.314278] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        Dec 2 16:21:24 mishtop kernel: [ 2297.314283] usb 2-2: Product: Canon Digital Camera
        Dec 2 16:21:24 mishtop kernel: [ 2297.314287] usb 2-2: Manufacturer: Canon Inc.
        Dec 2 16:21:24 mishtop mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2"
        Dec 2 16:21:24 mishtop mtp-probe: bus: 2, device: 4 was not an MTP device

    This problem has only started recently and I've had all the hardware for ages. I have also recently upgraded to 12.10, though I'm not sure if the problem started with the upgrade or after it. I also note this similar question, but it is currently unanswered and I'm providing more detail.

    Read the article

  • Nodejs for processing js and Nginx for handling everything else

    - by Kevin Parker
    I have Node.js running on port 8000 and nginx on port 80 on the same server. I want nginx to handle all the requests (images, CSS, etc.) and forward only the .js requests to the Node.js server on port 8000. Is it possible to achieve this? I have configured nginx as a reverse proxy, but it forwards every request to Node.js; I want nginx to process everything except .js. From nginx/sites-enabled/default:

        upstream nodejs {
            server localhost:8000; # nodejs
        }

        location / {
            proxy_pass http://192.168.2.21:8000;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
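
    A hedged sketch of the usual pattern (the document root is a placeholder and would need to match the real static-file layout): let the catch-all location serve files directly, and add a regex location so only .js requests reach the upstream:

        location / {
            root /var/www/static;      # nginx serves images, css, html itself
        }

        location ~ \.js$ {
            proxy_pass http://nodejs;  # only .js requests go to the Node.js upstream
            proxy_set_header Host $host;
        }

    Here proxy_pass references the existing upstream block, so the regex location becomes the only path by which requests reach port 8000.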

    Read the article

  • Allow READ access to local folders in 2003SBS AD

    - by Dan M.
    I have an SBS2003 client with a mess of a domain that is in the process of being cleaned up. But for the life of me I cannot find a setting that will allow write access to the local hard disk for domain users with profiles redirected to the server. This is needed only for one program that will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive... So the question is: how can I allow "Domain Users" write access to the local %appdata% directory? I have tried setting it manually on a machine, but it kept resetting to read-only no matter how many times I tried. Every time I unchecked the read-only property, it would reset right after I hit OK. Thanks in advance! Dan

    Read the article

  • Using an EC2 instance as the main development platform

    - by David
    My problem

    I am working as a consultant for various companies. Each company provides me with a laptop with their software on it, and I also have my own, where I have my development environment. I tend to buy a new laptop every second year and find myself spending lots of time configuring it and installing software. I also sometimes spend a lot of time waiting for my laptop to process things.

    To solve all these issues, I am now considering using EC2 (running Windows instances) as my main development platform and just accessing it from whatever PC I happen to be at. I calculated that running the High-CPU On-Demand Instances (medium) for 8 hours a day for a year costs me $580, which is acceptable. I imagine that when I approach the workplace each day, I will make a single click on my phone to fire up the instance, so it is ready when I get to work. I would have different icons on my phone to fire up the various instance types. The same software should of course automatically be loaded on the various hardware (sometimes I would even need their instance with 68.4 GB of memory). Another advantage is that if I am having a specific problem with my instance, I could fire up another instance and have someone look into the problem and update the image.

    My question

    Does anyone have experience with such a setup on EC2? What kinds of problems do you foresee?
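
    As a hedged aside on the "single click from my phone" step: with the modern AWS command line (which postdates this question; the instance ID below is a placeholder), starting and stopping the instance reduces to one command each, easy to wire up to a phone shortcut:

        # fire up the development instance on the way to work...
        aws ec2 start-instances --instance-ids i-0123456789abcdef0
        # ...and shut it down at the end of the day to stop the meter
        aws ec2 stop-instances --instance-ids i-0123456789abcdef0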

    Read the article

  • Working with QuickBooks using LINQPad

    - by dataintegration
    The RSSBus ADO.NET Providers can be used from many applications and development environments. In this article, we show how to use LINQPad to connect to QuickBooks using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process applies to any of our ADO.NET Providers.

    Create the Data Model

    Step 1: Download and install both the Data Provider from RSSBus and LINQPad (available at www.linqpad.net).
    Step 2: Create a new project in Visual Studio and create a data model for it using the ADO.NET Entity Data Model wizard.
    Step 3: Create a new connection by clicking "New Connection", specify the connection string options, and click Next.
    Step 4: Select the desired tables and views and click Finish to create the data model.
    Step 5: Right-click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'.
    Step 6: Now build the project. The generated files can be used to create a QuickBooks connection in LINQPad.

    Create the connection to QuickBooks in LINQPad

    Step 7: Open LINQPad and click 'Add New Connection'.
    Step 8: Choose 'Entity Framework DbContext POCO'.
    Step 9: Choose the data model assembly ('.dll') created by Visual Studio as the 'Path to Custom Assembly'. Choose the name of the custom DbContext, the path to the config file, and assign a name to the connection that will allow you to recognize its purpose.
    Step 10: Congratulations! Now you have a connection to QuickBooks, and you can query data through LINQPad.

    Read the article

  • A "region code" restriction for a custom created video dvd file

    - by user180820
    I want to create a video DVD (no menus, just "plug and play") from a few video files. I'm doing it like this:

        ffmpeg -i sample-media/hellboy-2.wmv -y -target ntsc-dvd sample-media-to-mpeg/hellboy-2.vob
        dvdauthor -o sample-dvd -x dvdauthor-settings.xml
        mkisofs -dvd-video -o hellboy-2-trailers.iso sample-dvd/

    where "dvdauthor-settings.xml" is: link. But when I try to play the ISO file in Windows it says:

    Windows Media Player cannot play the DVD because the disc prohibits playback in your region of the world. You must obtain a disc that is intended for your geographic region.

    When I open the *.IFO file with IfoEdit it says that all world regions are enabled. Can someone tell me why this is happening? (Maybe the whole process of creating the *.iso file is wrong?)

    Read the article

  • Compiz shutdown hook?

    - by ???
    I want to check 10+ local Git repositories for any unpushed commits before shutdown. (I always forget to push them, so the next morning I arrive at the office and have to go back home again.) I think the shutdown process could check some conditions and, if any condition is not met, give the user the choice to continue the shutdown or cancel it. Then I could write something to hook into the shutdown and check whether my Git repositories need a push.

    EDIT: I have changed the title from "Ubuntu shutdown hook" to "Compiz shutdown hook". I want to hook into the small, lovely shutdown button rather than click another ugly shortcut icon on the desktop.
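
    The check itself is straightforward regardless of where it ends up hooked in - a minimal shell sketch (the repository paths are placeholders) that lists commits present on any local branch but on no remote:

        #!/bin/sh
        # warn about commits that exist locally but have not been pushed anywhere
        for repo in "$HOME/src/repo1" "$HOME/src/repo2"; do
          unpushed=$(cd "$repo" && git log --branches --not --remotes --oneline)
          if [ -n "$unpushed" ]; then
            echo "Unpushed commits in $repo:"
            echo "$unpushed"
          fi
        done

    A hook script like this could exit non-zero when anything is unpushed, leaving the "cancel the shutdown" part to whatever mechanism invokes it.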

    Read the article

  • Installing CentOS on a remote machine

    - by vijay.shad
    Hi. Can I do a fresh install of the CentOS Linux distribution on a remote machine? I have a machine with a Windows NT system in a remote location. Now I want to use that system as my deployment machine. To proceed with my plan I need to install a Linux OS (I have chosen CentOS). The guy with the machine there is not very familiar with the installation process of any OS, so I thought of installing remotely. Please give me some insight into how I can achieve this.

    Read the article

  • Applying flattened CSS to XHTML

    - by rahul
    Hi. I am trying to apply flattened CSS to XHTML (which I get from websites). I am using css4j-0.12.1.jar (from http://www.informatica.info/projects/css/) to process the URL's HTML content. For a few URLs the code seems to hang, and no exception is thrown: for a particular stylable element or style, the code does not return any value and hangs. For a few websites the code hangs in CSSStyleDeclaration.getPropertyValue(style); and for a few websites the code hangs in CSSStylableElement.getComputedStyle();. I am using the latest jar. Could you please let me know whether there is any known issue or fix for this? I am not sure where to post this issue; if anyone knows, please let me know. Thanks in advance.

    Read the article

  • Return HTTP status OK (200) on request method OPTIONS in Apache

    - by jazz
    I have an Apache server which uses a reverse proxy to connect/direct to a Tomcat server, using a VirtualHost:

        RequestHeader set X-Forwarded-Proto "http"
        ServerName image.abc.local
        DocumentRoot "/var/www/html"
        ProxyRequests Off
        ProxyTimeout 600
        ProxyPass /abc http://image.abc.local:9001/abc
        ProxyPass /xyz http://image.abc.local:9001/xyz
        ProxyPassReverse /abc http://image.abc.local:9001/abc
        ProxyPassReverse /xyz http://image.abc.local:9001/xyz

    What I want to achieve here is that when the REQUEST_METHOD is OPTIONS, Apache simply returns HTTP status OK (200). I don't want the request to be received and processed by the Tomcat server; for performance reasons I want this request handled at the Apache level. With all my research I was still unable to get this to run:

        RewriteEngine on
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule .* - [R=200m]

    Can somebody assist me with what the rewrite rule should be? Or is there an alternative to RewriteEngine? Thanks
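
    For what it's worth, a hedged sketch of the form this usually takes (assuming mod_rewrite and mod_headers are loaded; the trailing m in the flag above looks like a stray character): with a non-3xx status in the R flag, the dash substitution makes Apache answer immediately with that status and stop processing the rule set:

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} =OPTIONS
        RewriteRule ^ - [R=200,L]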

    Read the article

  • Is there such thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and technologies in general in use in organizations today. We programmers have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know if anyone has done any serious thinking about how systems are integrated?

    Let me give an example. Hypothetically, let's say I own a company that makes specialized suits à la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays, some touch, some traditional. Oh, and I also have one of these newfangled LEED Platinum buildings, and it has a number of different computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc.

    What I want to know is if anyone has done any scientific work on trying to figure out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card and to have my phone follow me throughout the building. This problem is also more than a technology challenge: every technology implementation enables certain human behaviours, so the human must also be considered as a part of the system. Has anyone done any work on how to effectively weave these components together?

    FYI: I am not trying to build a system. I want to know if anyone has thoroughly studied the process of doing a large integration project: how they developed their requirements, how they studied the human behaviors, etc.

    Read the article

  • Meraki's Accounting-Requests to RADIUS server

    - by PachinSV
    I'm running a RADIUS server with some Meraki APs; the authentication process works fine. But it seems that the Meraki Cloud Controller is only sending the authentication packets and not the accounting requests. I've tested the RADIUS server by sending accounting requests with the radclient tool (locally) and it worked. I think that maybe my RADIUS server is ignoring the accounting requests from the MCC because there are some Vendor-Specific Attributes that my RADIUS server doesn't know. Should I add a Meraki dictionary to my RADIUS configuration? I'm kind of desperate; any ideas?

    Read the article

  • CentOS and Broadcom Wireless Drivers

    - by Sami
    Hi. I'm tired of reading and tired of trying to find a solution myself; that's why I decided to post my question. I'm following the tutorial located at the CentOS Wiki to install the driver for my WiFi device. However, I'm facing a strange error at the beginning of the process:

        make -C /lib/modules/`uname -r`/build/ M=`pwd`
        make: *** /lib/modules/2.6.18-194.11.3.el5PAE/build/: File not found

    Does anyone know what I'm doing wrong? This is the first time I have tried to install Linux on my laptop.
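
    A hedged pointer, inferred from the error rather than from the tutorial: /lib/modules/<version>/build is a symlink into the kernel headers, so the make fails when the matching headers package is absent. On CentOS 5 with a PAE kernel the usual fix is:

        # install the headers matching the running PAE kernel, then re-run make
        yum install kernel-PAE-devel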

    Read the article

  • JavaOne + Develop Registration is Open!

    - by justin.kestelyn
    Welcome to "The Zone". Here's what the new JavaOne + Develop registration Website says: The world's most important developer conferences are creating the world's coolest neighborhood for the developer community. Having been intimately involved in the planning process, I can vouch for that statement. Remember, if either co-located conference - JavaOne or Oracle Develop - are the confines of your interest, you can experience either one in standalone mode, if you like (although there are some areas of common interest, of course). Or, considering that a single Full Conference Pass gives you access to both of them, you can partake in any measure that you like. It's up to you. Either way, you will get access not only to session content and keynotes, but also to the massive OTN Night party on Monday night, to open unconference sessions, and to the legendary Appreciate Night concert (acts TBD) on Wednesday. Furthermore, as is customary, the Oracle Technology Network team will offer a full slate of community-focused activities and goodies while the conferences are running - more details on those as we have them. A GOOD time is ensured for all; I look forward to seeing you there!

    Read the article

  • Undeliverable messages to newly migrated Exchange user

    - by johnnyb10
    We are in the process of migrating from our old domain to a new one, part of which involves migrating mailboxes from Exchange 2003 to Exchange 2007. A bunch of users have been migrated already without problems. However, one of the users is having trouble receiving emails from others. When someone sends to him, they get an Undeliverable NDR that says "A configuration error in the e-mail system caused the message to bounce between two servers or to be forwarded between two recipients." The message shows the user's distinguished name as /OU=OurDomain/CN=Recipients/CN=USER57137172. The user's account name should just be "USER", so I don't know where the extra numbers ("57137172") are coming from. Thanks in advance.

    Read the article

  • How to manage drawing loop when changing render targets

    - by George Duckett
    I'm managing my game state by having a base GameScreen class with a Draw method. I then have (basically) a stack of GameScreens that I render. I render the bottom one first, as screens above might not completely cover the ones below. I now have a problem where one GameScreen changes render targets while doing its rendering. Anything the previous screens have drawn to the backbuffer is lost (as XNA emulates what happens on the Xbox). I don't want to just set the backbuffer to preserve its contents, as I want this to work on the Xbox as well as the PC. How should I manage this problem? A few ideas I've had:

    - Render every GameScreen to its own render target, then render them all to the backbuffer.
    - Create some kind of RenderAction queue where a game screen (and anything else, I guess) could queue something to be rendered to the backbuffer. They'd render whatever they wanted to any render target as normal, but if they wanted to render to the backbuffer they'd stick that in a queue, which would get processed once all render-target rendering was done.
    - Abstract away from render targets and backbuffers and have some way of representing how graphics flow and transform between render targets, with something that manages/works out the correct rendering order (and render targets) given what each rendering process needs as input and what it produces as output.

    I think each of my ideas has pros and cons, and there are probably several other ways of approaching this general problem, so I'm interested in finding out what solutions are out there.

    Read the article
