Search Results

Search found 18311 results on 733 pages for 'input director'.


  • iptables DoS limit for all ports

    - by user973917
    I know how to use the limit and conntrack options to allow for DoS protection. However, I want to add protection that allows no more than, say, 50 connections for each port. How can I do this? Basically, I want to make sure each port can have no more than 50 connections, rather than applying a 50-connection limit globally (which is what the second rule below does, I believe). Would I do something like:

        iptables -A INPUT --dport 1:65535 -m limit --limit 50/minute --limit-burst 50 -j ACCEPT

    or

        iptables -A INPUT -m limit --limit 50/minute --limit-burst 50 -j ACCEPT
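
    A possible approach (an untested sketch, not from the original question): the hashlimit match can keep a separate counter per destination port, which rate-limits new connections to each port individually rather than capping concurrent connections. The chain position and thresholds below are assumptions to adapt.

        # Sketch: drop new TCP connections to any single port above ~50/minute
        # (place this before the ACCEPT rules; the threshold is an example)
        iptables -A INPUT -p tcp --syn \
          -m hashlimit --hashlimit-name per-port --hashlimit-mode dstport \
          --hashlimit-above 50/minute -j DROP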

    Read the article

  • Bluetooth Issues Ubuntu 13.10

    - by Eduardo
    I have a Bluetooth headset which worked perfectly on Ubuntu 13.04. Now I have updated to 13.10, and here is what's happening: after installing blueman, bluetooth-suport, pulseaudio-module-bluetooth and so on, I can find my device, pair it and connect to the headset service. But the device does not appear in the Sound Settings, so I just can't select it as an input/output device. In other words, it's connected but "useless". So, searching around for solutions, I found a piece of software called stream2ip. With this I can connect the device and it appears in the Sound Settings, and the sound plays on the device as well, but the microphone does not work, even when selected in the settings, and the A2DP option still does not work. Stream2ip isn't a solution at all - everything was working without it in the previous Ubuntu version. Maybe I'm missing something, and I hope someone can give me a hint. And formally, the question: how can I get the A2DP output option and the input working again on Ubuntu 13.10? Thanks!

    Read the article

  • Convert swf file to mp4 file using FFMPEG

    - by user1624004
    I want to show an HTML5 video on an HTML page. I have a sample.swf file and want to convert it to an .mp4, .ogg, or .webm file. I have tried:

        ffmpeg -i sample.swf sample.mp4

    But I got this error:

        [swf @ 0000000001feef40] Could not find codec parameters for stream 0 (Audio: pcm_s16le, 5512 Hz, 1 channels, 88 kb/s): unspecified sample format
        Consider increasing the value for the 'analyzeduration' and 'probesize' options
        [swf @ 0000000001feef40] Estimating duration from bitrate, this may be inaccurate
        Guessed Channel Layout for Input Stream #0.0 : mono
        Input #0, swf, from 'sample.swf':
          Duration: N/A, bitrate: N/A
            Stream #0:0: Audio: pcm_s16le, 5512 Hz, mono, 88 kb/s
            Stream #0:1: Video: mjpeg, yuvj444p, 1024x768 [SAR 100:100 DAR 4:3], 16 fps, 16 tbr, 16 tbn
        File 'sample.mp4' already exists. Overwrite ? [y/N] y
        Invalid sample format '(null)'
        Error opening filters!
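
    A possible workaround (a hedged sketch, not part of the original question): the failure is on the audio stream, so explicitly choosing output codecs, or dropping the audio entirely, sometimes gets past it. The codec choices are assumptions and require an ffmpeg build with libx264/AAC support.

        # Re-encode video and audio explicitly (older builds may also need: -strict experimental)
        ffmpeg -i sample.swf -c:v libx264 -c:a aac sample.mp4

        # Or drop the audio stream if it is not needed
        ffmpeg -i sample.swf -c:v libx264 -an sample.mp4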

    Read the article

  • "VLC could not read the file" error when trying to play DVDs

    - by stephenmurdoch
    I can watch most DVDs on my machine using VLC, but today I went to watch Thor and it won't play.

    - libdvdread4 and libdvdcss2 are at the latest versions
    - vlc -v returns 1.1.4
    - w32codecs are installed and reinstalled
    - ubuntu-restricted-extras are same as above

    My machine recognises the disc and I can open the folder and browse the assorted .vob files, of which there are many. None of them will open in VLC, or in MPlayer etc. When I run

        vlc -vvv /media/THOR/VIDEO_TS/VTS_03_1.VOB

    I get:

        File Reading Failed
        VLC could not read the file

    I also see command line output like this:

        [0x963f47c] main filter debug: removing module "swscale"
        [0x963a4b4] main generic debug: A filter to adapt decoder to display is needed
        [0x964be84] main filter debug: looking for video filter2 module: 18 candidates
        [0x964be84] swscale filter debug: 720x576 chroma: I420 -> 979x551 chroma: RV32 with scaling using Bicubic (good quality)
        [0x964be84] main filter debug: using video filter2 module "swscale"
        .....
        [0x959f4e4] main video output warning: late picture skipped (-10038 > -15327)
        [0x963a4b4] main generic debug: auto hidding mouse
        [0x93ca094] main input warning: clock gap, unexpected stream discontinuity
        [0x93ca094] main input warning: feeding synchro with a new reference point trying to recover from clock gap
        [0x959f4e4] main video output warning: early picture skipped
        ......
        ac-tex damaged at 0 12
        ac-tex damaged at 6 20
        ac-tex damaged at 12 28

    This happens with the onboard drive and a known-good USB DVD drive; I don't have a standalone DVD player to try with the TV. I am going to watch another film instead for now, because I can do that. I just can't watch Thor, and I'm pretty confident that the disc is OK. It is a rental, but it's clean and there are no surface abrasions. I even cleaned it with Christian Dior aftershave to make sure.

    Read the article

  • Getting error when using Oracle imp command to import dmp file

    - by blizz
    I am importing a dmp file using the following command:

        imp user/pass file=file.dmp log=logfile.log full=y ignore=y destroy=y

    I get the following errors:

        . . importing table "MESSAGE_BOARD"
        IMP-00058: ORACLE error 22993 encountered
        ORA-22993: specified input amount is greater than actual source amount
        IMP-00028: partial import of previous table rolled back: 219638 rows rolled back
        . . importing table "MESSAGE_BOARD_ARCHIVES"
        IMP-00058: ORACLE error 22993 encountered
        ORA-22993: specified input amount is greater than actual source amount
        IMP-00028: partial import of previous table rolled back: 2477960 rows rolled back

    I have tried increasing the tablespace size and adding a second datafile, to no avail. I'm a total newb at Oracle so any help will be appreciated. Googled this for hours and still can't come up with a solution. Thanks!

    Read the article

  • iCloud stuff stops working while connected to OpenVPN [closed]

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is Viscosity on Mac OS X 10.8.2, and after some testing we can rule out the client as being part of the problem. Everything has been working fine except for Apple's iCloud services. Web surfing, email, FTP, NNTP, and Skype are all working as expected. It's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working: I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working. Connect again, iCloud stops working. Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over: Client -> work VPN -> my OpenVPN box -> Internet, or Client -> Airport Express -> ISP (which is doing NAT) -> my OpenVPN box -> Internet. Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server. I am completely out of ideas. I have posted a similar query at the OpenVPN forums, but it hasn't appeared yet and seems to still be in their moderation queue. I tried the freenode IRC channels, but nobody is awake, so here I am. I have Googled extensively for this and can find nothing that is related. Help me get iCloud stuff working again!
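
    A hedged diagnostic sketch (not from the original post): Apple's push notification traffic normally runs over outbound TCP port 5223, so watching for it on both interfaces while a client is connected shows whether the tunnel and the SNAT rule are actually passing it. The interface names are taken from the rules above.

        # On the VPN server: is push traffic (TCP 5223) arriving from the tunnel...
        tcpdump -ni tun0 'tcp port 5223'

        # ...and leaving the public interface after SNAT?
        tcpdump -ni venet0 'tcp port 5223'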

    Read the article

  • Setting the Timezone with an automated script

    - by Tom
    I'm writing scripts to automate setting up new Slicehost installations. In a perfect world, after I started the script it would just run, with no attention from me. I have succeeded, with one exception: how do I set the timezone, in a permanent (survives reboot) and sane (adjusts for standard and daylight saving time, so no just forcing the date) manner that doesn't require input from me? Currently, I'm using:

        dpkg-reconfigure tzdata

    This doesn't seem to have any way to force parameters into it; it demands user input.

    EDIT: I'm editing here, rather than commenting, since comments don't seem to allow code blocks. Here's the actual code I ended up with, based on Rudedog's comment below. I also noticed that this doesn't update /etc/timezone. I'm not certain who uses that, but in case anybody does, I'm setting that too.

        TIMEZONE="America/Los_Angeles"
        echo $TIMEZONE > /etc/timezone
        cp /usr/share/zoneinfo/${TIMEZONE} /etc/localtime    # This sets the time
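
    For reference, a non-interactive variant (a sketch using the standard Debian/Ubuntu tooling, not part of the original post): write the zone to /etc/timezone and let tzdata reconfigure itself, which avoids copying the zoneinfo file by hand. The zone name is an example to replace.

        # Sketch: set the timezone without any prompts
        echo "America/Los_Angeles" > /etc/timezone
        dpkg-reconfigure -f noninteractive tzdata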

    Read the article

  • Mini DisplayPort -> DVI -> VGA ?

    - by ibz
    I have a Mini DisplayPort to DVI adapter which I use to connect my MacBook to a screen that has DVI input. I also have a screen with VGA-only input which I want to be able to use, so instead of buying yet another expensive Apple adapter (the Mini DisplayPort to VGA one), I just got a cheap DVI-to-VGA adapter, so I can chain Mini DisplayPort -> DVI -> VGA. It doesn't seem to work, though: the screen just says "no connection". Does anyone know if this is actually supposed to work (and my DVI-to-VGA adapter is just broken), or is this simply not supported and I need to get the expensive Apple Mini DisplayPort to VGA adapter? Thanks!

    Read the article

  • passing data to php in ajax [closed]

    - by MertMETIN
    I'm trying to pass my data: the checkboxes carry some data in their ids, and I try to read them, like this:

        <input align="right" type=checkbox name="checkArtist[]" class="checkClass" id ="{$movieId}-{$mCast.id}"></li>

    Don't worry about {$movieId}-{$mCast.id}; they are template-lite tags. I successfully read the checkbox data on the JavaScript side. The problem is that I cannot send this data to my PHP. Here is my AJAX code:

        var artistIds = new Array();
        $(".p16 input:checked").each()(function(){
            artistIds.push($(this).attr('id'));
        });
        $.post('/json/crewonly/deleteDataAjax2', { 'artistIds': artistIds }, function(response){
            if (response == 'ok')
                alert("ok");
        });

    But on the PHP side, $artistIds is always empty. I found a topic on Stack Overflow (http://stackoverflow.com/questions/5571646/how-to-pass-a-javascript-array-via-jquery-post-so-that-all-its-contents-are-acce) that says how to pass JS arrays:

        $.post('/url/to/page', {'someKeyName': variableName});

    This is the same as my code. What's going wrong?

    Read the article

  • To display the field values submitted with AJAX [closed]

    - by work
    Here is the code. I want to post the field values entered in this code to the page ajaxpost.php using AJAX and then do some operations there. What would be the code required in ajaxpost.php?

        <html>
        <head>
        <script type="text/javascript">
        function loadXMLDoc()
        {
          var xmlhttp;
          if (window.XMLHttpRequest)
          { // code for IE7+, Firefox, Chrome, Opera, Safari
            xmlhttp = new XMLHttpRequest();
          }
          else
          { // code for IE6, IE5
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
          }
          xmlhttp.onreadystatechange = function()
          {
            if (xmlhttp.readyState == 4 && xmlhttp.status == 200)
            {
              document.getElementById("myDiv").innerHTML = xmlhttp.responseText;
            }
          }
          var zz = document.f1.dd.value;
          //alert(zz);
          var qq = document.f1.cc.value;
          xmlhttp.open("POST", "ajaxpost.php", true);
          xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
          xmlhttp.send("dd=zz&cc=qq");
        }
        </script>
        </head>
        <body>
        <h2>AJAX</h2>
        <form name="f1">
          <input type="text" name="dd">
          <input type="text" name="cc">
          <button type="button" onclick="loadXMLDoc()">Request data</button>
          <div id="myDiv"></div>
        </form>
        </body>
        </html>

    Read the article

  • How Do I make an Acer T230H Touchscreen work on Ubuntu 9.10?

    - by N Rahl
    I've done this so far:

        sudo nano /etc/udev/rules.d/99-touchscreen.rules

    And added:

        SUBSYSTEM=="usb", ATTRS{idVendor}=="0408", ATTRS{idProduct}=="3000", SYMLINK+="usb/quanta_touch"
        SUBSYSTEM=="input", KERNEL=="event*", ATTRS{idVendor}=="0408", ATTRS{idProduct}=="3000", SYMLINK+="input/quanta_touch"

    then:

        sudo service udev restart

    then the instructions here: http://ubuntuforums.org/showpost.php?p=8932808&postcount=36. And then added to my xorg.conf:

        Section "InputDevice"
            Identifier "Acer T230H"
            Driver "hidtouch"
            Option "SendCoreEvents" "true"
            Option "ReportingMode" "Raw"
            Option "Device" "/dev/usb/quanta_touch"
            Option "PacketCount" "13"
            Option "OpcodePressure" "852034"
            Option "OpcodeX" "65584"
            Option "OpcodeY" "65585"
            Option "CalibrationModel" "1"
            Option "CornerTopLeftX" "0"
            Option "CornerTopLeftY" "0"
            Option "CornerTopRightX" "1920"    # 1920 for 23"
            Option "CornerTopRightY" "0"
            Option "CornerBottomLeftX" "0"
            Option "CornerBottomLeftY" "1080"  # 1080 for 23"
            Option "CornerBottomRightX" "1920" # 1920 for 23"
            Option "CornerBottomRightY" "1080" # 1080 for 23"
            Option "CornerScreenWidth" "1920"  # 1920 for 23"
            Option "CornerScreenHeight" "1080" # 1080 for 23"
        EndSection

        Section "ServerLayout"
            Identifier "Touchscreen"
            InputDevice "Acer T230H" "SendCoreEvents"
        EndSection

    And restarted. And the touchscreen does nothing. Any ideas?
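
    A couple of quick sanity checks (a hedged suggestion, not from the original question) to confirm the udev rule actually created the device nodes and that the hardware is visible before debugging the xorg.conf options:

        # Is the touchscreen on the USB bus? (0408:3000 is the id used in the udev rule)
        lsusb | grep -i 0408:3000

        # Did the udev rule create the symlinks?
        ls -l /dev/usb/quanta_touch /dev/input/quanta_touch

        # Does X list the device as an input device?
        xinput list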

    Read the article

  • Game actions that take multiple frames to complete

    - by CantTetris
    I've never really done much game programming before; pretty straightforward question. Imagine I'm building a Tetris game, with the main loop looking something like this:

        for every frame
            handle input
            if it's time to make the current block move down a row
                if we can move the block
                    move the block
                else
                    remove all complete rows
                    move rows down so there are no gaps
                    if we can spawn a new block
                        spawn a new current block
                    else
                        game over

    Everything in the game so far happens instantly - things are spawned instantly, rows are removed instantly, etc. But what if I don't want things to happen instantly (i.e. animate things)?

        for every frame
            handle input
            if it's time to make the current block move down a row
                if we can move the block
                    move the block
                else
                    ?? animate complete rows disappearing (somehow, wait over multiple frames until the animation is done)
                    ?? animate rows moving downwards (and again, wait over multiple frames)
                    if we can spawn a new block
                        spawn a new current block
                    else
                        game over

    In my Pong clone this wasn't an issue, as every frame I was just moving the ball and checking for collisions. How can I wrap my head around this issue? Surely most games involve some action that takes more than a frame, and other things halt until the action is done.

    Read the article

  • Convert MP3 to AAC, FLAC to AAC (.NET/C#) FREE :)

    - by PearlFactory
    So I was tasked with looking at converting 10 million tracks from MP3 320k to AAC, and also converting from MP3 320k to MP3 128k. After a bit of hunting around, the tool you need to use is FFMPEG (grab the x64 Windows build); for the best results also get the Nero AAC Encoder. Now the command line.

    STEP 1 (from FLAC):

        ffmpeg -i input.flac -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    or (from MP3):

        ffmpeg -i input.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    The output.m4a is an intermediate state that we now put an AAC wrapper on via FFMPEG.

    STEP 2:

        ffmpeg -i output.m4a -vn -acodec copy final.aac

    Done :) There are a couple of options with the FFMPEG library, in that we can look at importing the libraries and manipulating the API for a direct result; FFMPEG has this support. You can get the relevant libraries from their site - they even have the source if you are that keen :-)

    In this case I am going to wrap the command lines into C# external process threads. (For the app that I am building to convert the 10 million tracks there is a complex multithreaded app to support this novel code.)

        // Arrange metadata about the call
        Process myProcess = new Process();
        ProcessStartInfo p = new ProcessStartInfo();
        string sArgs = string.Format(" -i {0} -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of {1}", inputfile, outputfile);
        p.FileName = "ffmpeg.exe";
        p.CreateNoWindow = true;
        p.RedirectStandardOutput = true;
        //p.WindowStyle = ProcessWindowStyle.Normal
        p.UseShellExecute = false;
        // Execute
        p.Arguments = sArgs;
        myProcess.StartInfo = p;
        myProcess.Start();
        myProcess.WaitForExit();
        // Write details about the call
        myProcess.StandardOutput.ReadToEnd();

    Now in this case we would execute a second call using the same code, but with different sArgs, to put the AAC wrapper on the m4a file. That's it. So if you need to do some conversions of any kind for your ASP.NET sites/apps, this is a great start and super fast. With conversion times of around 2-3 seconds, all of this can be done on the fly :-)

    Justin Oehlmann

    ref: StackOverflow.com

    Read the article

  • PowerShell filter vs. function

    - by Marcel Janus
    I'm currently reading the Windows PowerShell 3.0 Step by Step book to get some more insight into PowerShell. On page 201 the author demonstrates that a filter is faster than a function with the same functionality. This script takes 2.6 seconds on his computer (MeasureaddOneFilter.ps1):

        Filter AddOne {
          "add one filter"
          $_ + 1
        }

    and this one 4.6 seconds (MeasureaddOneFunction.ps1):

        Function AddOne {
          "Add One Function"
          While ($input.moveNext())
          {
            $input.current + 1
          }
        }

    If I run this code I get the exact opposite of his result:

        .\MeasureAddOneFilter.ps1
        Days              : 0
        Hours             : 0
        Minutes           : 0
        Seconds           : 0
        Milliseconds      : 226
        Ticks             : 2266171
        TotalDays         : 2,62288310185185E-06
        TotalHours        : 6,29491944444444E-05
        TotalMinutes      : 0,00377695166666667
        TotalSeconds      : 0,2266171
        TotalMilliseconds : 226,6171

        .\MeasureAddOneFunction.ps1
        Days              : 0
        Hours             : 0
        Minutes           : 0
        Seconds           : 0
        Milliseconds      : 93
        Ticks             : 933649
        TotalDays         : 1,08061226851852E-06
        TotalHours        : 2,59346944444444E-05
        TotalMinutes      : 0,00155608166666667
        TotalSeconds      : 0,0933649
        TotalMilliseconds : 93,3649

    Can someone explain this to me?

    Read the article

  • Bluetooth headset A2DP works, HSP/HFP not (no sound/no mic)

    - by Stefan Armbruster
    My Philips SBH9001 headset pairs fine using Ubuntu 12.04. In the audio settings it's properly detected as an A2DP device and as an HSP/HFP device.

    Hardware: Thinkpad X230, Ubuntu 12.04 64-bit, kernel 3.6.0-030600rc3-generic (built from the Ubuntu mainline repo); the Bluetooth device is USB id 0a5c:21e6 from Broadcom; the headset is a Philips SBH9001. Note: kernel 3.6 rc3 is used because of a fix for audio on the docking station that is not in any previous branch.

    Playing audio in A2DP works just fine out of the box, but when switching the headset to HSP/HFP mode there is no sound, nor does the microphone work. When connecting the headset, /var/log/syslog shows:

        Aug 25 21:32:47 x230 bluetoothd[735]: Badly formated or unrecognized command: AT+CSRSF=1,1,1,1,1,7
        Aug 25 21:32:49 x230 rtkit-daemon[1879]: Successfully made thread 17091 of process 14713 (n/a) owned by '1000' RT at priority 5.
        Aug 25 21:32:49 x230 rtkit-daemon[1879]: Supervising 4 threads of 1 processes of 1 users.
        Aug 25 21:32:50 x230 kernel: [ 4860.627585] input: 00:1E:7C:01:73:E1 as /devices/virtual/input/input17

    When switching from A2DP (the default profile) to HSP/HFP:

        Aug 25 21:34:36 x230 bluetoothd[735]: /org/bluez/735/hci0/dev_00_1E_7C_01_73_E1/fd3: fd(34) ready
        Aug 25 21:34:36 x230 rtkit-daemon[1879]: Successfully made thread 17309 of process 14713 (n/a) owned by '1000' RT at priority 5.
        Aug 25 21:34:36 x230 rtkit-daemon[1879]: Supervising 4 threads of 1 processes of 1 users.
        Aug 25 21:34:41 x230 bluetoothd[735]: Audio connection got disconnected

    Any hints how to get HSP/HFP working here?
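
    One hedged thing to check (not from the original question): whether PulseAudio has actually switched the card to its headset profile. The exact profile name varies between PulseAudio versions, so list the card first; the card name below is derived from the MAC address in the log and is an assumption.

        # List Bluetooth cards and the profiles PulseAudio offers for them
        pactl list cards | grep -A 20 bluez

        # Switch to the headset profile reported above (profile name is an example;
        # it may be "hsp" or "headset_head_unit" depending on the stack)
        pactl set-card-profile bluez_card.00_1E_7C_01_73_E1 hsp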

    Read the article

  • How can I improve my error checking and handling?

    - by Google
    Lately I have been struggling to understand what the right amount of checking is and what the proper methods are. I have a few questions regarding this: What is the proper way to check for errors (bad input, bad states, etc.)? Is it better to explicitly check for errors, or to use functions like asserts which can be optimized out of your final code? I feel like explicit checking clutters a program with a lot of extra code which shouldn't be executed in most situations anyway - and not to mention most errors end up with an abort/exit failure. Why clutter a function with explicit checks just to abort? I have looked for asserts versus explicit checking of errors and found little that truly explains when to do either. Most say "use asserts to check for logic errors and use explicit checks to check for other failures." This doesn't seem to get us very far, though. Would we say this is feasible:

    - malloc returning null: check explicitly
    - An API user passing odd input to functions: use asserts

    Would this make me any better at error checking? What else can I do? I really want to improve and write better, 'professional' code.

    Read the article

  • Is this possible?

    - by PythonNewbie2
    Hello, I'm exploring some technologies, and JSP with JSF 2.0 and PrimeFaces seems really cool. I'm new to all of these, but I'm a fast learner. I'm wondering if I can create the web app I want with JSP/JSF/PrimeFaces, or should I be looking at different technologies? If I should, which ones do you recommend? Here's a basic description of the app: Users log in with their username and password (maybe I can somehow incorporate Google OpenID?). With a really nice UI, they will be presented a large list of questions specific to a certain category, for example, JSP. When they click on any of these questions, a little input opens up below it to allow the user to put in a link. If the page the URL points to contains the same question, they will be awarded one point. The question then disappears and gets added to a different page that lists all correctly linked questions. On the right side of the screen there will be a leaderboard with the usernames of the people with the top ten points. Is this possible with JSP/JSF/PrimeFaces, or should I be looking elsewhere for a different web technology? The idea is relatively simple - to be able to compile links to external websites for specific questions. I know I can build the UI easily with PrimeFaces. What I'm not sure about is whether JSP/JSF gives me the ability to parse the HTML at a certain URL to see if it contains certain words. I can do this easily with Python by using urllib. Any help would be appreciated! What would be more helpful than a "Yes" or "No" answer would be links to where I can see sample code for external HTML parsing. Your input is truly appreciated! Thanks!

    Read the article

  • Unexpected issues with SessionPageStatePersister

    - by geekrutherford
    Several iterations ago I implemented the SessionPageStatePersister in an application as a way to cut down on the size of the hidden ViewState input on .aspx pages.

    At first it seemed utterly fantastic. The size of the ViewState appeared to be drastically reduced, and the application did not appear to perform any slower than baseline.

    Enter the iframe and user control. I added a user control which pings the web server every 20 seconds in order to show updated application information to the user (new messages, reports, etc.). After releasing this nifty little control into the QA environment I quickly began receiving emails from testers about "post back" related error messages, which mostly centered around invalid ViewState exceptions.

    At first I dismissed it as something related to all of the AJAX requests happening on the page and considered turning off page event validation. However, upon further investigation I came across the following article: Things That You Should Watch Out For When Using SessionPageStatePersister. In this article the author specifically states:

        If you application uses frames than each frame request will create a new session view state item and as before it will remove items when reaching the maximum, you come into a situation that one of the frames will probably loose it session view state because other frames did post backs.

    Oh snap! That is precisely what I am doing. That, combined with multiple users on the application, equals dropped ViewStates!

    The temporary fix has been to disable the use of the SessionPageStatePersister in my application. This results in a bloated hidden ViewState input, but the web server is no longer tasked with maintaining/retrieving it and the app no longer loses ViewState information.

    Read the article

  • Platformer Enemy AI

    - by hayer
    I'm currently developing a platformer shooter. The game is multiplayer, and while my net code could use some real work I have put that off for now, so currently I'm trying to implement the AI. The game is pretty simple: players run around on a map filled with X amount of zombies that try to eat their brains - classic and overused, I know. Weapons spawn at random intervals around the map. The problem is that when the zombies find their prey they have to follow it for some while, and here is the problem: running the AI nav code seems to take forever. So here are the ideas I have come up with so far:

    1. Have the AI update at different intervals, with a maximum of Y ms between updates.
    2. Have the zombies assigned to groups of zombies. One is appointed the leader of the group, who finds the way to the player - the rest just follow the leader. If the leader dies, another one of the zombies in the group is appointed president of the zombie swarm. If there are fewer than five zombies in a group, they try to meet up with other zombies (i.e. they are assigned to a different group and therefore a new leader).
    3. Multi-threading option one or two?

    For navigation I have some kind of navmesh (since the game is not tile-based) that tells the zombies where they can walk, etc. If anyone else has some ideas on how to do navigation I would love some input. For line of sight (zombie - player) I have split the map into grids. If the player's grid is connected to the zombie's grid (if I go with option two I would only need to check whether the leader zombie's grid is connected to the player's, i.e. fewer checks) - if they are connected and it has been more than 250 ms since the last check, do a raytrace. This is my first time programming AI, so input on any field is appreciated.

    Read the article

  • How to replace emacs keybindings with vim keybindings in OS X GUI-level text fields

    - by post meridiem
    I'm fairly fluent with VIM, but find myself having to use GUI programs (in OS X) and their awkward editing modes more and more frequently for my work. I know that OS X lets you use basic emacs keybindings in most textfields (browser window/bar, etc.). I'm wondering if it's possible to switch the emacs keybindings to vim keybindings for those GUI-level input areas. I understand that it might be possible to do that key-by-key in the keyboard layout preference pane. But that approach seems limited, cumbersome, and not very elegant. I'm thinking--and I may well be wrong here--that since OS X already ships with VIM installed, there should be a way to change a preference file deep in the system that maps VIM instead of emacs keybindings to the GUI text/input areas. Does anyone know if this is a) theoretically possible, or if there's something about how OS X maps emacs keybindings to its GUI interface that would make this impossible; and b) how/where that could be done?

    Read the article

  • Is mocking for unit testing appropriate in this scenario?

    - by Vinoth Kumar
    I have written around 20 methods in Java and all of them call some web services. None of these web services are available yet. To carry on with the server-side coding, I hard-coded the results that the web service is expected to give. Can we unit test these methods? As far as I know, unit testing is mocking the input values and seeing how the program responds. Is mocking both input and output values meaningful?

    Edit: The answers here suggest I should be writing unit test cases. Now, how can I write them without modifying the existing code? Consider the following sample (hypothetical) code:

        public int getAge() {
            Service s = locate("ageservice"); // line 1
            int age = s.execute(empId);       // line 2
            return age;                       // line 3
        }

    Now how do we mock the output? Right now, I am commenting out line 1 and replacing line 2 with int age = 50. Is this right? Can anyone point me to the right way of doing it?

    Read the article

  • Help to organize game cycle in Java

    - by ASIO22
    I'm pretty new here (as well as to game development). So here's my question: I'm trying to organize a really simple game cycle in my public static main() as follows:

        Scanner sc = new Scanner(System.in);
        // Running the game cycle
        boolean flag = true;
        while (flag) {
            int action;
            System.out.println("Type your action please:");
            System.out.println("0: Exit app");
            try {
                action = sc.nextInt();
                switch (action) {
                    case 0:
                        flag = false;
                        break;
                    case 1:
                        break;
                }
            } catch (InputMismatchException ex) {
                System.out.println(ex.getClass() + "\n" + "Please type a correct input\n");
                //action = sc.nextInt();
                continue;
            }
        }

    What's wrong with this cycle: I want to catch an exception when the user types text instead of a number, show a message warning the user, and then continue the game cycle, read user input, etc. But instead, when the user types wrong data, it goes into an eternal cycle without even prompting the user. What did I do wrong?

    Read the article

  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file like the following (and multiple variations thereof):

        server {
            listen 3000;
            server_name .example.com;
            root /Users/website/public;
            passenger_enabled on;
            rails_env development;
        }
        server {
            listen 3443;
            root /Users/website/public;
            rails_env development;
            passenger_enabled on;
            ssl on;
            #ssl_verify_client on;
            ssl_certificate /Users/website/ssl/server.crt;
            ssl_certificate_key /Users/website/ssl/server.key;
            #ssl_client_certificate /Users/website/ssl/CA.crt;
            ssl_session_timeout 5m;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-SSL-Subject $ssl_client_s_dn;
            #proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
        }

    and Rails code in the controller like this:

        request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" }

    other headers appear, but not those set in nginx, e.g.:

        Header rack.multithread Val false
        Header REQUEST_URI Val /login/new
        Header REMOTE_PORT Val 64021
        Header rack.multiprocess Val true
        Header PASSENGER_USE_GLOBAL_QUEUE Val false
        Header PASSENGER_APP_TYPE Val rails
        Header SCGI Val 1
        Header SERVER_PORT Val 3443
        Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Header rack.request.query_hash Val
        Header DOCUMENT_ROOT Val /Users/website/public

    I've even gone so far as to modify the main_loop method in Passenger's abstract_request_handler, i.e.,

        headers, input = parse_request(client)
        if headers
          if headers[REQUEST_METHOD] == PING
            process_ping(headers, input, client)
          else
            headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" }
            process_request(headers, input, client)
          end
        end

    only to find that the supposedly added headers do not exist there either:

        abstract_request_handler: HTTP_KEEP_ALIVE = 300
        abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5
        abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2
        abstract_request_handler: CONTENT_LENGTH = 0
        abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad"
        abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5
        abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1
        abstract_request_handler: HTTPS = on
        abstract_request_handler: REMOTE_ADDR = 127.0.0.1
        abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61
        abstract_request_handler: SERVER_ADDR = 127.0.0.1
        abstract_request_handler: SCRIPT_NAME =
        abstract_request_handler: PASSENGER_ENVIRONMENT = development
        abstract_request_handler: REMOTE_PORT = 64021
        abstract_request_handler: REQUEST_URI = /login/new
        abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7
        abstract_request_handler: SERVER_PORT = 3443
        abstract_request_handler: SCGI = 1
        abstract_request_handler: PASSENGER_APP_TYPE = rails
        abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false

    I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:

    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast - much faster than a single processor
    - The actual data needs to be kept secured - another reason not to want to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA.

    If we go to sql*plus, we can also check out our new IRR_DATA table:

    At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work: the Oracle database is what actually splits the IRR_DATA table by ACCOUNT, takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and then combines all the individual results from the calls to the R function.

    This is significant because the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts).

    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server - this is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipeline table function mechanisms. Here is the setup script:

    At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate.

    Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
    The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.

    For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features. Let's look at the SQL used in the customer proof:

    Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.

    Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (hint: you might need to right-click on these images and view them full-screen to see the entire image). From the above, you can notice a few things (the numbers below correspond with the highlighted numbers on the images):

    - The SQL completed in 110 seconds (1.8 minutes)
    - We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    - We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    - We ran with 72 degrees of parallelism spread across 4 database servers
    - Most of our 110 seconds was spent in the "External Procedure call" event
    - On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    - On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    - On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
    - On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Did 12.04 just add multi-touch gesture support mid-release?

    - by adempewolff
    I was reviewing the updates I was about to download today and noticed that a lot of them had to do with gesture support, and that many of these were new installs rather than upgrades. Has 12.04 just added multi-touch gesture support mid-release? If so, what capabilities does this add? Which applications already support these capabilities, and can I expect others to add support in the near future?

    Here are the packages that were installed:

        Install: libframe6:amd64 (2.2.4-0ubuntu0.12.04.1), libgeis1:amd64 (2.2.9.2-0ubuntu1), libgrail5:amd64 (3.0.6-0ubuntu0.12.04.01, automatic)

    And here are those that were upgraded (also including many with touch support):

        Upgrade: libgrip0:amd64 (0.3.4-0ubuntu2~ubuntu12.04.1, 0.3.5-0ubuntu1~12.04.1), eog:amd64 (3.4.2-0ubuntu1, 3.4.2-0ubuntu1.1), ginn:amd64 (0.2.4-0ubuntu1, 0.2.4.1-0ubuntu1)

    The descriptions for the new installs are:

    - libgeis1 (Gesture engine interface support): A common API for clients of a systemwide gesture recognition and propagation engine.
    - libframe6 (Touch Frame Library): This library handles the buildup and synchronization of a set of simultaneous touches. The library is input agnostic, with bindings for mtdev, frame and XI2.1.
    - libgrail5 (Gesture Recognition And Instantiation Library): This library consists of an interface and tools for handling gesture recognition and gesture instantiation. Applications can use the grail callbacks to receive gesture primitives and raw input events from the underlying kernel device.

    And the descriptions for the upgraded packages are:

    - libgrip0 (provides multitouch gestures to GTK+ apps): Libgrip hooks gesture recognition into GTK+ applications.
    - ginn (Gesture Injector: No-GEIS, No-Toolkits): A daemon with jinn-like wish-granting capabilities: it gives applications the ability to support a subset of multi-touch gestures without having to integrate GEIS or multi-touch GTK/Qt libs.

    Adding in a ton of new libraries and upgrading the existing components makes me wonder if 12.04 is meant to start natively supporting gestures other than two-finger scroll in the near future. I expected these capabilities to be introduced soon, but I thought they would only be rolled out in a new release, not as upgrades to an existing release. Anyone have any info about this?

    Read the article
