Search Results

Search found 1114 results on 45 pages for 'flush'.


  • using pf for packet filtering and ipfw's dummynet for bandwidth limiting at the same time

    - by krdx
    I would like to ask if it's fine to use pf for all packet filtering (including using altq for traffic shaping) and ipfw's dummynet for bandwidth limiting certain IPs or subnets at the same time. I am using FreeBSD 10 and I couldn't find a definitive answer to this. Googling returns such results as:

    - It works
    - It doesn't work
    - It might work, but it's not stable and not recommended
    - It can work as long as you load the kernel modules in the right order
    - It used to work, but with recent FreeBSD versions it doesn't
    - You can make it work provided you use a patch from pfSense

    Then there's a mention that this patch might have been merged back into FreeBSD, but I can't find it. One certain thing is that pfSense uses both firewalls simultaneously, so the question is: is it possible with stock FreeBSD 10 (and where can I obtain the patch if it's still necessary)? For reference, here's a sample of what I have for now and how I load things.

    /etc/rc.conf:

        ifconfig_vtnet0="inet 80.224.45.100 netmask 255.255.255.0 -rxcsum -txcsum"
        ifconfig_vtnet1="inet 10.20.20.1 netmask 255.255.255.0 -rxcsum -txcsum"
        defaultrouter="80.224.45.1"
        gateway_enable="YES"
        firewall_enable="YES"
        firewall_script="/etc/ipfw.rules"
        pf_enable="YES"
        pf_rules="/etc/pf.conf"

    /etc/pf.conf:

        WAN1="vtnet0"
        LAN1="vtnet1"
        set skip on lo0
        set block-policy return
        scrub on $WAN1 all fragment reassemble
        scrub on $LAN1 all fragment reassemble
        altq on $WAN1 hfsc bandwidth 30Mb queue { q_ssh, q_default }
        queue q_ssh bandwidth 10% priority 2 hfsc (upperlimit 99%)
        queue q_default bandwidth 90% priority 1 hfsc (default upperlimit 99%)
        nat on $WAN1 from $LAN1:network to any -> ($WAN1)
        block in all
        block out all
        antispoof quick for $WAN1
        antispoof quick for $LAN1
        pass in on $WAN1 inet proto icmp from any to $WAN1 keep state
        pass in on $WAN1 proto tcp from any to $WAN1 port www
        pass in on $WAN1 proto tcp from any to $WAN1 port ssh
        pass out quick on $WAN1 proto tcp from $WAN1 to any port ssh queue q_ssh keep state
        pass out on $WAN1 keep state
        pass in on $LAN1 from $LAN1:network to any keep state

    /etc/ipfw.rules:

        ipfw -q -f flush
        ipfw -q add 65534 allow all from any to any
        ipfw -q pipe 1 config bw 2048KBit/s
        ipfw -q pipe 2 config bw 2048KBit/s
        ipfw -q add pipe 1 ip from any to 10.20.20.4 via vtnet1 out
        ipfw -q add pipe 2 ip from 10.20.20.4 to any via vtnet1 in
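    Since load order is the variable several of those answers point at, one way to rule it out is to load the modules by hand before enabling either firewall and confirm all three actually attach. This is only a minimal sketch, assuming the stock module names ipfw.ko, dummynet.ko and pf.ko on FreeBSD 10, not a confirmed fix:

        #!/bin/sh
        # Load ipfw first, then dummynet (which depends on ipfw), then pf.
        kldload ipfw
        kldload dummynet
        kldload pf
        # Verify that all three modules are resident before rc starts the firewalls.
        kldstat | grep -E 'ipfw|dummynet|pf'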


  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get these error messages on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel informing me that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former to be the case - if disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption, then any of these errors is really bad news. Please advise what to do here.
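    For what it's worth, the 120 seconds in that message is only the kernel's hung-task reporting threshold, not a deadline after which anything is killed, and it can be inspected or changed at runtime. A small sketch, assuming a guest kernel that exposes the sysctl:

        # Show the current hung-task warning threshold (in seconds).
        cat /proc/sys/kernel/hung_task_timeout_secs
        # Disable the warning (0), or raise the threshold instead:
        sysctl -w kernel.hung_task_timeout_secs=300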


  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day, I am attempting to replicate a setup I have between a router and an Ubuntu PC, and to get the same setup working on my MacBook (10.6, Snow Leopard). First, I have a router that has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it into my computer via USB, it gets assigned an IP address of 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device via ssh [email protected]. When I log in to the device, I flush the routes, then create the default route:

        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2

    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:

        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE

    Once this is all done, the router has internet access over the USB connection to my PC. I am trying to replicate this exact setup on my MacBook (Snow Leopard), but iptables does not exist for OSX - not even a MacPorts version. I have scoured other questions on StackOverflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with some experience with ipfw have suggestions on how I could accomplish this and create a NAT connection via IP masquerading like I could with my Ubuntu PC? Thank you for your assistance.
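    On Snow Leopard the masquerading part is normally done by natd, with ipfw only diverting packets to it. The following is an untested sketch of that combination, assuming en1 is the interface with the outbound internet connection:

        # Enable forwarding between interfaces.
        sysctl -w net.inet.ip.forwarding=1
        # Start the NAT daemon on the outbound interface.
        natd -interface en1
        # Divert everything crossing en1 through natd.
        ipfw add divert natd all from any to any via en1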


  • Google Chrome - Issues with download dialogs using BinaryWrite

    - by Mila
    Hello, I have an empty ASP.NET page with code-behind to support downloading of PDF files. The page is called from a link-like web control that has NavigateUrl set to this page. In short, I am using the following for streaming:

        Response.Buffer = false;
        Response.ClearHeaders();
        Response.ContentType = "application/x-pdf";
        Response.AddHeader("Content-Disposition", "attachment; filename=\"MyPDFFile.pdf\"");
        byte[] binary = (dataReaderRemote[DataPDFFieldName]) as byte[]; // dataReaderRemote[DataPDFFieldName] has previously retrieved data
        if (binary != null)
        {
            MemoryStream memoryStream = new MemoryStream(binary);
            int sizeToWrite = CHUNKSIZE; // CHUNKSIZE = 1024
            for (int i = 0; i < binary.GetUpperBound(0) - 1; i = i + CHUNKSIZE)
            {
                if (!Response.IsClientConnected) return;
                if (i + CHUNKSIZE >= binary.Length) sizeToWrite = binary.Length - i;
                byte[] chunk = new byte[sizeToWrite];
                memoryStream.Read(chunk, 0, sizeToWrite);
                Response.BinaryWrite(chunk);
                Response.Flush();
            }
        }
        Response.Close();

    IE as well as Firefox bring up the download prompt window asking whether you wish to open or save the file, while the user remains on the same page containing the link. However, Google Chrome opens a new blank tab and downloads the file automatically. Is there any way to prevent Chrome from opening the extra blank and therefore useless tab? I am using Google Chrome version 5.0.375.55 (Official Build 47796) on Windows XP. Thanks in advance! Mila


  • How to send audio data from Java Applet to Rails controller

    - by cooldude
    Hi, I have to send audio data, in a byte array obtained by recording from a Java applet on the client side, to a Rails server at the controller, in order to save it. So, what encoding parameters should be used on the applet side, and in what form should the audio data be converted (String or byte array), so that Rails correctly receives the data and I can then save it at the Rails end in a file? Currently the audio file written by the Rails controller is not playing; mplayer gives the following error: LAVF_header: av_open_input_stream() failed. Here is the Java code:

        package networksocket;

        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javax.swing.JApplet;
        import java.net.*;
        import java.io.*;
        import java.awt.event.*;
        import java.awt.*;
        import java.sql.*;
        import javax.swing.*;
        import javax.swing.border.*;
        import java.util.Properties;
        import javax.swing.plaf.basic.BasicSplitPaneUI.BasicHorizontalLayoutManager;
        import sun.awt.HorizBagLayout;
        import sun.awt.VerticalBagLayout;
        import sun.misc.BASE64Encoder;

        /**
         * @author mukand
         */
        public class Urlconnection extends JApplet implements ActionListener {

            public BufferedInputStream in;
            public BufferedOutputStream out;
            public String line;
            public FileOutputStream file;
            public int bytesread;
            public int toread = 1024;
            byte b[] = new byte[toread];
            public String f = "FINISH";
            public String match;
            public File fileopen;
            public JTextArea jTextArea;
            public Button refreshButton;
            public HttpURLConnection urlConn;
            public URL url;
            OutputStreamWriter wr;
            BufferedReader rd;

            /**
             * Initialization method that will be called after the applet is loaded
             * into the browser.
             */
            @Override
            public void init() {
                // TODO start asynchronous download of heavy resources
                //textField= new TextField("START");
                //getContentPane().add(textField);
                JPanel p = new JPanel();
                jTextArea = new JTextArea(1500, 1500);
                p.setLayout(new GridLayout(1, 1, 1, 1));
                p.add(new JLabel("Server Details"));
                p.add(jTextArea);
                Container content = getContentPane();
                content.setLayout(new GridBagLayout()); // Used to center the panel
                content.add(p);
                jTextArea.setLineWrap(true);
                refreshButton = new java.awt.Button("Refresh");
                refreshButton.reshape(287, 49, 71, 23);
                refreshButton.setFont(new Font("Dialog", Font.PLAIN, 12));
                refreshButton.addActionListener(this);
                add(refreshButton);
                Properties properties = System.getProperties();
                properties.put("http.proxyHost", "netmon.iitb.ac.in");
                properties.put("http.proxyPort", "80");
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                try {
                    url = new URL("http://localhost:3000/audio/audiorecieve");
                    urlConn = (HttpURLConnection) url.openConnection();
                    //String login = "mukandagarwal:rammstein$";
                    //String encodedLogin = new BASE64Encoder().encodeBuffer(login.getBytes());
                    //urlConn.setRequestProperty("Proxy-Authorization",login);
                    urlConn.setRequestMethod("POST");
                    //urlConn.setRequestProperty("Content-Type", "application/octet-stream");
                    //urlConn.setRequestProperty("Content-Type", "audio/mpeg");
                    //urlConn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
                    //urlConn.setRequestProperty("Content-Length", "" + Integer.toString(urlParameters.getBytes().length));
                    urlConn.setRequestProperty("Content-Language", "UTF-8");
                    urlConn.setDoOutput(true);
                    urlConn.setDoInput(true);
                    byte bread[] = new byte[2048];
                    int iread;
                    char c;
                    String data = URLEncoder.encode("key1", "UTF-8") + "=";
                    //String data="key1=";
                    FileInputStream fileread = new FileInputStream("//home//mukand//Hellion.ogg");//Dogs.mp3");//Desktop//mausam1.mp3");
                    while ((iread = fileread.read(bread)) != -1) {
                        //data+=(new String());
                        /*for(int i=0;i<iread;i++) {
                            //c=(char)bread[i];
                            System.out.println(bread[i]);
                        }*/
                        data += URLEncoder.encode(new String(bread, iread), "UTF-8");//new String(new String(bread));
                        //data+=new String(bread,iread);
                    }
                    //urlConn.setRequestProperty("Content-Length",Integer.toString(data.getBytes().length));
                    System.out.println(data);
                    //data+=URLEncoder.encode("mukand", "UTF-8");
                    //data += "&" + URLEncoder.encode("key2", "UTF-8") + "=" + URLEncoder.encode("value2", "UTF-8");
                    //data="key1=";
                    wr = new OutputStreamWriter(urlConn.getOutputStream());
                    //if((iread=fileread.read(bread))!=-1)
                    //    wr.write(bread,0,iread);
                    wr.write(data);
                    wr.flush();
                    fileread.close();
                    jTextArea.append("Send");
                    // Get the response
                    rd = new BufferedReader(new InputStreamReader(urlConn.getInputStream()));
                    while ((line = rd.readLine()) != null) {
                        jTextArea.append(line);
                    }
                    wr.close();
                    rd.close();
                    //jTextArea.append("click");
                } catch (MalformedURLException ex) {
                    Logger.getLogger(Urlconnection.class.getName()).log(Level.SEVERE, null, ex);
                } catch (IOException ex) {
                    Logger.getLogger(Urlconnection.class.getName()).log(Level.SEVERE, null, ex);
                }
            }

            @Override
            public void start() {
            }

            @Override
            public void stop() {
            }

            @Override
            public void destroy() {
            }
            // TODO overwrite start(), stop() and destroy() methods
        }

    Here is the Rails controller function for receiving:

        def audiorecieve
          puts "///////////////////////////////////////******RECIEVED*******////"
          puts params[:key1] #+" "+params[:key2]
          data = params[:key1] # request.env('RAW_POST_DATA')
          file = File.new("audiodata.ogg", 'w')
          file.write(data)
          file.flush
          file.close
          puts "////**************DONE***********//////////////////////"
        end

    Please reply quickly


  • Instant connection to wireless network but delayed internet access on Mediacom with Windows 7

    - by David
    I have Mediacom cable internet and their provided modem/wireless router, a Cisco DPC3825. Each of the laptops experiencing the trouble runs Windows 7 64-bit. When connecting to the wireless network, each computer takes a second or two to connect and then toggles from "no internet access" to "internet access"; however, no websites are accessible for about five minutes after connecting. After that, there aren't any problems. It happens on all 3 of the laptops I have available, and none of them have problems on any other network. It seems like my phone doesn't have the delay issue when it connects. I've power cycled the modem/router along with a DNS flush. I have some of the DNS servers manually set to Google DNS addresses and one just default. I've contacted Mediacom support and had them try all their tricks. They changed the SSID and password along with resetting the thing remotely a handful of times. It was installed just this month and seemed to pass the tech's checks upon installation. Nothing in the settings has been changed, but it's been exhibiting this problem from the get-go. This guy seems to be having the same problem, but no solution was posted: http://www.dslreports.com/forum/r27372861-IA-Connection-to-Mediacom-wireless-Modem-no-internet- Help greatly appreciated.


  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario:

    1. Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume.
    2. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots.
    3. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr.
    4. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var.
    5. The backup script takes a snapshot of /var.
    6. The backup script creates backups from the snapshots it has, er, snapshotted.

    So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
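    One way to close the window between the two snapshots is to freeze both filesystems first, take the snapshots back-to-back, and then thaw - no write can then land between the /usr and /var snapshots. A sketch under assumed names (volume group vg0, logical volumes lv_usr and lv_var):

        # Quiesce both filesystems.
        fsfreeze -f /usr
        fsfreeze -f /var
        # Take the snapshots while everything is frozen.
        lvcreate -s -n usr_snap -L 1G /dev/vg0/lv_usr
        lvcreate -s -n var_snap -L 1G /dev/vg0/lv_var
        # Thaw in reverse order.
        fsfreeze -u /var
        fsfreeze -u /usr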


  • Best way to 'harden' embedded ext4 file server against unexpected loss of power?

    - by Jeremy Friesner
    Hi all, First, a little background: my company makes an audio streaming device that is a headless, rack-mounted Linux box with a couple of SSDs attached. Each SSD is formatted with ext4. The users can connect to the system using Samba/CIFS to upload new audio files or access existing ones. There is also custom software for streaming out audio over the network. This is all fine. The only problem is that the users are audio people, not computer people, and see the system as a 'black box', not as a computer. Which means that at the end of the day, they aren't going to ssh in to the box and enter "/sbin/shutdown -h"; they are just going to cut power to the rack and leave, and expect things to still work properly the next day.

    Since ext4 has journalling, journal checksumming, etc., this mostly works. The only time it doesn't work is when someone uploads a new file via Samba and then cuts power to the system before the uploaded data has been fully flushed to the disk. In that case, they come in the next day and find that their new file has been truncated or is missing entirely, and are unhappy.

    My question is, what is the best way to avoid this problem? Is there a way to get smbd to call "sync" at the end of every upload? (Performance on uploads isn't so important, since they only happen occasionally.) Or is there a way to tell ext4 to automatically flush within a few seconds of any change to a file? (Again, performance can be sacrificed for safety here.) Should I set a particular write-ordering mode, activate barriers, etc.?
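    Two knobs worth testing for this, offered as hedged suggestions rather than a tuned configuration: Samba can be told to sync on every write for the upload share, and ext4 can commit its journal more often than the default five seconds. Roughly, with share and mount names assumed:

        # smb.conf share for uploads (force every write out to disk):
        [uploads]
            path = /media/ssd1/audio
            strict sync = yes
            sync always = yes

        # /etc/fstab line (journal commit every second, write barriers on):
        # /dev/sda1  /media/ssd1  ext4  defaults,commit=1,barrier=1  0 2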


  • Passing PATH through sudo

    - by whitequark
    In short: how do I make sudo not flush PATH every time? I have some websites deployed on my server (Debian testing) written in Ruby on Rails. I use Mongrel+Nginx to host them, but there is one problem that comes up when I need to restart Mongrel (e.g. after making some changes). All sites are checked in to a VCS (git, but that is not important) and have owner and group set to my user, whereas Mongrel runs under the, huh, mongrel user, which is severely restricted in its rights. So Mongrel must be started under root (it can automatically change UID) or mongrel. To manage Mongrel I use the mongrel_cluster gem, because it allows starting or stopping any number of Mongrel servers with just one command. But it needs the directory /var/lib/gems/1.8/bin to be in PATH: starting it by absolute path alone is not enough. Modifying PATH in root's .bashrc changed nothing, and tweaking sudo's env_reset and keepenv didn't either. So the question: how do I add a directory to PATH, or keep the user's PATH, under sudo?
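    On Debian, sudo is normally built with secure_path, which overrides both the caller's PATH and env_keep; appending the gem bin directory there is the usual fix. A sketch of the sudoers line, to be edited with visudo:

        # /etc/sudoers (edit with visudo)
        Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin"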


  • Problem while running the j2me application

    - by Paru
    I am not able to view any content in the emulator while running the application. The build is not failing and I am able to run the application successfully, but when I close the emulator I get an error. I can provide both the code and the log here.

        import javax.microedition.lcdui.*;
        import javax.microedition.midlet.*;
        import java.io.*;
        import java.lang.*;
        import javax.microedition.io.*;
        import javax.microedition.rms.*;

        public class Login extends MIDlet implements CommandListener {
            TextField ItemName = null;
            TextField ItemNo = null;
            TextField UserName = null;
            TextField Password = null;
            Form authForm, mainscreen;
            TextBox t = null;
            StringBuffer b = new StringBuffer();
            private Display myDisplay = null;
            private Command okCommand = new Command("Login", Command.OK, 1);
            private Command exitCommand = new Command("Exit", Command.EXIT, 2);
            private Command sendCommand = new Command("Send", Command.OK, 1);
            private Command backCommand = new Command("Back", Command.BACK, 2);
            private Alert alert = null;

            public Login() {
                ItemName = new TextField("Item Name", "", 10, TextField.ANY);
                ItemNo = new TextField("Item No", "", 10, TextField.ANY);
                myDisplay = Display.getDisplay(this);
                UserName = new TextField("Your Name", "", 10, TextField.ANY);
                Password = new TextField("Password", "", 10, TextField.PASSWORD);
                authForm = new Form("Identification");
                mainscreen = new Form("Logging IN");
                mainscreen.addCommand(sendCommand);
                mainscreen.addCommand(backCommand);
                authForm.append(UserName);
                authForm.append(Password);
                authForm.addCommand(okCommand);
                authForm.addCommand(exitCommand);
                authForm.setCommandListener(this);
                myDisplay.setCurrent(authForm);
            }

            public void startApp() throws MIDletStateChangeException {
            }

            public void pauseApp() {
            }

            protected void destroyApp(boolean unconditional) throws MIDletStateChangeException {
            }

            public void commandAction(Command c, Displayable d) {
                if ((c == okCommand) && (d == authForm)) {
                    if (UserName.getString().equals("") || Password.getString().equals("")) {
                        alert = new Alert("Error", "You should enter Username and Password", null, AlertType.ERROR);
                        alert.setTimeout(Alert.FOREVER);
                        myDisplay.setCurrent(alert);
                    } else {
                        //myDisplay.setCurrent(mainscreen);
                        login(UserName.getString(), Password.getString());
                    }
                }
                if ((c == backCommand) && (d == mainscreen)) {
                    myDisplay.setCurrent(authForm);
                }
                if ((c == exitCommand) && (d == authForm)) {
                    notifyDestroyed();
                }
                if ((c == sendCommand) && (d == mainscreen)) {
                    if (ItemName.getString().equals("") || ItemNo.getString().equals("")) {
                    } else {
                        sendItem(ItemName.getString(), ItemNo.getString());
                    }
                }
            }

            public void login(String UserName, String PassWord) {
                HttpConnection connection = null;
                DataInputStream in = null;
                String url = "http://olario.net/submitpost/submitpost/login.php";
                OutputStream out = null;
                try {
                    connection = (HttpConnection) Connector.open(url);
                    connection.setRequestMethod(HttpConnection.POST);
                    connection.setRequestProperty("IF-Modified-Since", "2 Oct 2002 15:10:15 GMT");
                    connection.setRequestProperty("User-Agent", "Profile/MIDP-1.0 Configuration/CLDC-1.0");
                    connection.setRequestProperty("Content-Language", "en-CA");
                    connection.setRequestProperty("Content-Length", "" + (UserName.length() + PassWord.length()));
                    connection.setRequestProperty("username", UserName);
                    connection.setRequestProperty("password", PassWord);
                    out = connection.openDataOutputStream();
                    out.flush();
                    in = connection.openDataInputStream();
                    int ch;
                    while ((ch = in.read()) != -1) {
                        b.append((char) ch);
                        //System.out.println((char)ch);
                    }
                    //t = new TextBox("Reply",b.toString(),1024,0);
                    //mainscreen.append(b.toString());
                    String auth = b.toString();
                    if (in != null) in.close();
                    if (out != null) out.close();
                    if (connection != null) connection.close();
                    if (auth.equals("ok")) {
                        mainscreen.setCommandListener(this);
                        myDisplay.setCurrent(mainscreen);
                    }
                } catch (IOException x) {
                }
            }

            public void sendItem(String itemname, String itemno) {
                HttpConnection connection = null;
                DataInputStream in = null;
                String url = "http://www.olario.net/submitpost/submitpost/submitPost.php";
                OutputStream out = null;
                try {
                    connection = (HttpConnection) Connector.open(url);
                    connection.setRequestMethod(HttpConnection.POST);
                    connection.setRequestProperty("IF-Modified-Since", "2 Oct 2002 15:10:15 GMT");
                    connection.setRequestProperty("User-Agent", "Profile/MIDP-1.0 Configuration/CLDC-1.0");
                    connection.setRequestProperty("Content-Language", "en-CA");
                    connection.setRequestProperty("Content-Length", "" + (itemname.length() + itemno.length()));
                    connection.setRequestProperty("itemCode", itemname);
                    connection.setRequestProperty("qty", itemno);
                    out = connection.openDataOutputStream();
                    out.flush();
                    in = connection.openDataInputStream();
                    int ch;
                    while ((ch = in.read()) != -1) {
                        b.append((char) ch);
                        //System.out.println((char)ch);
                    }
                    //t = new TextBox("Reply",b.toString(),1024,0);
                    //mainscreen.append(b.toString());
                    String send = b.toString();
                    if (in != null) in.close();
                    if (out != null) out.close();
                    if (connection != null) connection.close();
                    if (send.equals("added")) {
                        alert = new Alert("Error", "Send Successfully", null, AlertType.INFO);
                        alert.setTimeout(Alert.FOREVER);
                        myDisplay.setCurrent(alert);
                    }
                } catch (IOException x) {
                }
            }
        }

    And the log is:

        pre-init:
        pre-load-properties:
        exists.config.active:
        exists.netbeans.user:
        exists.user.properties.file:
        load-properties:
        exists.platform.active:
        exists.platform.configuration:
        exists.platform.profile:
        basic-init:
        cldc-pre-init:
        cldc-init:
        cdc-init:
        ricoh-pre-init:
        ricoh-init:
        semc-pre-init:
        semc-init:
        savaje-pre-init:
        savaje-init:
        sjmc-pre-init:
        sjmc-init:
        ojec-pre-init:
        ojec-init:
        cdc-hi-pre-init:
        cdc-hi-init:
        nokiaS80-pre-init:
        nokiaS80-init:
        nsicom-pre-init:
        nsicom-init:
        post-init:
        init:
        conditional-clean-init:
        conditional-clean:
        deps-jar:
        pre-preprocess:
        do-preprocess:
        Pre-processing 0 file(s) into /home/sreekumar/NetBeansProjects/Login/build/preprocessed directory.
        post-preprocess:
        preprocess:
        pre-compile:
        extract-libs:
        do-compile:
        post-compile:
        compile:
        pre-obfuscate:
        proguard-init:
        skip-obfuscation:
        proguard:
        post-obfuscate:
        obfuscate:
        lwuit-build:
        pre-preverify:
        do-preverify:
        post-preverify:
        preverify:
        pre-jar:
        set-password-init:
        set-keystore-password:
        set-alias-password:
        set-password:
        create-jad:
        add-configuration:
        add-profile:
        do-extra-libs:
        nokiaS80-prepare-j9:
        nokiaS80-prepare-manifest:
        nokiaS80-prepare-manifest-no-icon:
        nokiaS80-create-manifest:
        jad-jsr211-properties.check:
        jad-jsr211-properties:
        semc-build-j9:
        do-jar:
        nsicom-create-manifest:
        do-jar-no-manifest:
        update-jad:
        Updating application descriptor: /home/sreekumar/NetBeansProjects/Login/dist/Login.jad
        Generated "/home/sreekumar/NetBeansProjects/Login/dist/Login.jar" is 3501 bytes.
        sign-jar:
        ricoh-init-dalp:
        ricoh-add-app-icon:
        ricoh-build-dalp-with-icon:
        ricoh-build-dalp-without-icon:
        ricoh-build-dalp:
        savaje-prepare-icon:
        savaje-build-jnlp:
        post-jar:
        jar:
        pre-run:
        netmon.check:
        open-netmon:
        cldc-run:
        Copying 1 file to /home/sreekumar/NetBeansProjects/Login/dist/nbrun4244989945642509378
        Copying 1 file to /home/sreekumar/NetBeansProjects/Login/dist/nbrun4244989945642509378
        Jad URL for OTA execution: http://localhost:8082/servlet/org.netbeans.modules.mobility.project.jam.JAMServlet//home/sreekumar/NetBeansProjects/Login/dist//Login.jad
        Starting emulator in execution mode
        Running with storage root /home/sreekumar/j2mewtk/2.5.2/appdb/temp.DefaultColorPhone1
        /home/sreekumar/NetBeansProjects/Login/nbproject/build-impl.xml:915: Execution failed with error code 143.
        BUILD FAILED (total time: 35 seconds)


  • Chrome caching 302 redirects

    - by Thermionix
    I have a php script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes will make another request and a different image will be returned. Under Chrome, the request seems to be cached and only opening the page in a new tab will cause it to actually query the script. I believe this used to work in older versions of Chrome. I've tried a few different types of redirect codes, all with the same result. Any tips?

        <img class="banner" src="/inc/banner.php" alt="">

        ~$ cat /var/www/inc/banner.php
        <?php
        header("HTTP/1.1 302 Redirect");
        header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
        //header('HTTP/1.1 307 Temporary Redirect');
        //header("expires: none");
        //header("expires: max");
        //header("Cache-Control: public");
        $folder = '../img/banner/';
        $exts = 'jpg jpeg png gif';
        $files = array();
        $i = -1;
        if ('' == $folder) $folder = './';
        $handle = opendir($folder);
        $exts = explode(' ', $exts);
        while (false !== ($file = readdir($handle))) {
            foreach ($exts as $ext) { // for each extension check the extension
                if (preg_match('/\.'.$ext.'$/i', $file, $test)) { // faster than ereg, case insensitive
                    $files[] = $file; // it's good
                    ++$i;
                }
            }
        }
        closedir($handle); // We're not using it anymore
        mt_srand((double)microtime()*1000000); // seed for PHP < 4.2
        $rand = mt_rand(0, $i); // $i was incremented as we went along
        header('Location: '.$folder.$files[$rand]);
        flush();
        ?>

    curl output:

        ~$ curl -I -k https://example.net/inc/banner.php
        HTTP/1.1 302 Redirect
        Server: nginx/1.1.14
        Date: Fri, 24 Feb 2012 03:23:46 GMT
        Content-Type: text/html
        Connection: keep-alive
        X-Powered-By: PHP/5.3.10-1ubuntu1
        Cache-Control: max-age=0, no-cache, no-store, must-revalidate
        Location: ../img/banner/2.jpg


  • Cache updates when migrating DNS from one provider to another

    - by JohnCC
    This may be a Windows DNS specific question or a general DNS best practice question - I'm not sure! We migrated our 3rd-party DNS provision from provider A to provider B. I noticed that our internal recursive Windows DNS servers still had NS records cached for our domains pointing to provider A's servers, even though I changed the nameservers with our registrar several days ago, and even though selecting the properties of the cached records showed a TTL of 1 day.

    After 24 hours, when the NS records in this cache have expired, will the DNS server go back to the TLD server for an update on the authority, or will it go by preference to dns1.providera.com, since that is what it has cached? In this case I arranged to leave provider A's servers up for a week to allow changes to propagate, so dns1.providera.com is still active and would still provide NS and SOA records saying that dns1.providera.com. is in charge of this domain. Given this fact, would the Windows DNS server ever go back to the TLD and pick up the authority changes, or would it just assume all is well and renew the timestamps on its cached NS records?

    I wonder what would be the best approach to ensuring that caches pick this up. Should I:

    1. Leave provider A's servers in place and active and wait for caches to catch up - basically what we're doing now, which seems to have issues, perhaps specifically for Windows servers, or perhaps more widely.
    2. Leave provider A's servers in place but change the NS and/or SOA information they provide, to tell caches that new servers are in charge.
    3. Remove provider A's servers after 2*TTL to force remaining caches to update.

    The issue with (2) is that on provider A's system I can't seem to change the NS or SOA information to anything other than their servers. The issue with (3) is that I'm not sure how a DNS server would behave in this case. When it couldn't reach the cached name servers, would it flush its cache and try a full recursive lookup, or would it just return an error, forcing the user to clear the cache manually? Thanks in advance!
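    To see whether a given cache will follow the registrar change on its own, it helps to compare the delegation the TLD servers are now handing out against what the cache still holds, and to be ready to clear the cache by hand. A sketch, with example.com standing in for the real domain:

        # Ask a .com gTLD server (non-recursively) which nameservers are delegated now.
        dig +norecurse @a.gtld-servers.net example.com NS
        # Ask the internal Windows recursive server what it currently believes.
        nslookup -type=NS example.com internal-dns-server
        # If it is still handing out provider A, flush the Windows DNS server cache.
        dnscmd /clearcache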


  • Mac Mini's internet very slow, every other device fine (PC, iPhone, Xbox 360)

    - by alex
    I hadn't used my Mac Mini for about 5 days (however, it was left on). I can connect and get great download/upload speeds on my PC, Xbox 360, iPhone and my parents' laptop. However, my Mac Mini is very slow. OS X's Mail.app is downloading mail at 0.4 kbps and then dropping to 0. Skype file transfers do the same. Browsing the net is a terrible experience; it takes 30 seconds or more to download basic pages. All of my devices connect wirelessly to a Netgear router/modem. I have tried giving the Mac Mini a manual IP, renewing the DHCP lease, and flushing DNS in Terminal. I have also rebooted the router/modem twice, and the Mac Mini twice. Do you know what could be causing this? Thanks

    Update: This is very weird. It is also very slow accessing localhost (set up through MAMP) and also slow to access the Netgear router config pages.
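    For reference, the manual DNS and flush steps on 10.6 can be done from Terminal as below, assuming the wireless service is named "AirPort" in Network preferences; this at least makes the test repeatable while separating DNS from the general slowness:

        # Pin the service to Google DNS, then flush the local resolver cache.
        networksetup -setdnsservers "AirPort" 8.8.8.8 8.8.4.4
        dscacheutil -flushcache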


  • Exchange DiskShadow/Robocopy backup does not purge log files

    - by Robert Allan Hennigan Leahy
    I have a series of scripts set up to back up my Exchange. The following command is executed to start the process:

        diskshadow /s C:\Backup_Scripts\exchangeserverbackupscript1.dsh

    This is exchangeserverbackupscript1.dsh:

        #DiskShadow script file
        set verbose on
        #delete shadows all
        set context persistent
        writer verify {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
        set metadata C:\Backup_Scripts\shadowmetadata.cab
        begin backup
        add volume C: alias SH1
        create
        expose %SH1% P:
        exec C:\Backup_Scripts\exchangeserverbackupscript1.cmd
        end backup
        delete shadows exposed P:
        exit
        #End of script

    And this is exchangeserverbackupscript1.cmd:

        robocopy "P:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group" "\\leahyfs\J$\E-Mail Backups\Day 1" /MIR /R:0 /W:0 /COPY:DT /B

    This is not causing Exchange to purge its log files. The edb file is 4.7 gigabytes, but the First Storage Group folder itself is 50+ gigabytes, due to many, many log files for each day going back to 2009. Is there any way - I've Googled and haven't found anything - to notify Exchange when I've completed a full backup, and have it purge its log files? According to this and this, end backup should cause Exchange to "flush the transaction logs for that storage group", but only "if a successful backup of a storage group occurred", which leaves my question as: what constitutes a "successful backup", and why is what I'm doing not it?
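    Since Exchange only truncates logs when its VSS writer itself reports a successful full backup, one useful check after the script runs is the writer's own status. A sketch of that verification step:

        vssadmin list writers
        REM Look for "Microsoft Exchange Writer": State should be [1] Stable
        REM and "Last error: No error" after the backup completes.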


  • Recovering data from mongodb raw files

    - by Jin Chen
    We use MongoDB for our database and set up a replica set (two servers), but we mistakenly deleted some raw files under /path/to/dbdata on both servers. We then used a tool to get the deleted files back (we ran extundelete on both servers and mixed the results together), recovering files like database.1, database.2, etc. We could not start mongod; it raised the following error when starting mongod or executing mongodump. Here is the console output:

        root@mongod:/opt/mongodb# mongodump --repair --dbpath /opt/mongodb -d database_production
        Thu Aug 21 16:22:43.258 [tools] warning: repair is a work in progress
        Thu Aug 21 16:22:43.258 [tools] going to try and recover data from: database_production
        Thu Aug 21 16:22:43.262 [tools] Assertion failure isOk() src/mongo/db/pdfile.h 392
        0xde1b01 0xda42fd 0x8ae325 0x8ac492 0x8bd8e0 0x8c1c51 0x80e345 0x80e607 0x80e6a4 0x6db92a 0x6dc1ff 0x6e0db9 0xd9e45e 0x6ccdc7 0x7f499d856ead 0x6ccc29
        mongodump(_ZN5mongo15printStackTraceERSo+0x21) [0xde1b01]
        mongodump(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda42fd]
        mongodump(_ZNK5mongo7Forward4nextERKNS_7DiskLocE+0x1a5) [0x8ae325]
        mongodump(_ZN5mongo11BasicCursor7advanceEv+0x82) [0x8ac492]
        mongodump(_ZN5mongo8Database19clearTmpCollectionsEv+0x160) [0x8bd8e0]
        mongodump(_ZN5mongo14DatabaseHolder11getOrCreateERKSsS2_Rb+0x7b1) [0x8c1c51]
        mongodump(_ZN5mongo6Client7Context11_finishInitEv+0x65) [0x80e345]
        mongodump(_ZN5mongo6Client7ContextC1ERKSsS3_b+0x87) [0x80e607]
        mongodump(ZN5mongo6Client12WriteContextC1ERKSsS3+0x54) [0x80e6a4]
        mongodump(_ZN4Dump7_repairESs+0x3a) [0x6db92a]
        mongodump(_ZN4Dump6repairEv+0x2df) [0x6dc1ff]
        mongodump(_ZN4Dump3runEv+0x1b9) [0x6e0db9]
        mongodump(_ZN5mongo4Tool4mainEiPPc+0x13de) [0xd9e45e]
        mongodump(main+0x37) [0x6ccdc7]
        /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x7f499d856ead]
        mongodump(__gxx_personality_v0+0x471) [0x6ccc29]
        assertion: 0 assertion src/mongo/db/pdfile.h:392
        Thu Aug 21 16:22:43.271 dbexit:
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to close listening sockets...
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to flush diaglog...
        Thu Aug 21 16:22:43.271 [tools] shutdown: going to close sockets...
        Thu Aug 21 16:22:43.272 [tools] shutdown: waiting for fs preallocator...
        Thu Aug 21 16:22:43.272 [tools] shutdown: closing all files...
        Thu Aug 21 16:22:43.273 [tools] closeAllFiles() finished
        Thu Aug 21 16:22:43.273 [tools] shutdown: removing fs lock...
        Thu Aug 21 16:22:43.273 dbexit: really exiting now

    My environment:

    1. Debian 3.2.35-2 x86_64 (it's a Xen virtual machine)
    2. MongoDB 2.4.6; we did not delete the .0 and .ns files

    We tried to create a new database with the same name and copy the db.ns, db.2 and db.3 files into the new db, but we still got the same error. Is there any way to check the validity of the raw .ns and data files, and how can we recover the database?
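    Since mongodump --repair goes through the same storage layer that is asserting, a server-side repair pass on a copy of the files is sometimes worth trying first; this is a hedged suggestion, not a guaranteed recovery, and repair rewrites files in place:

        # Always work on a copy of the recovered files, never the originals.
        cp -a /opt/mongodb /opt/mongodb_repair
        mongod --dbpath /opt/mongodb_repair --repair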


  • In spite of correct DNS, Exchange sending to wrong destination server for single outbound domain

    - by beporter
    My company uses an SBS 2003 server and makes use of Exchange to host our own email. We also have a Linux server hosting domains for some of our clients. In order for us to send to those clients, we had internal DNS set up to shadow the client domains, to provide "correct" MX records inside our network. For example, public DNS for a domain abc.com might point to 1.2.3.4, but internally we have MX records set up to route mail for abc.com to 172.16.0.4, which is the Linux email server. This setup was entirely functional; this is just back story.

    We've recently moved one of our client domains from our internal Linux server to an external email provider. When we did that, we naturally deleted our internal shadow DNS records so our Exchange server would fetch the correct (public) DNS records and route mail out to the new external host. This has NOT had any effect on Exchange, though. Even after rebooting the Exchange server and completely flushing the DNS cache (nslookup on the Exchange machine itself correctly resolves to the new external address), Exchange still attempts to deliver messages for the domain to our internal server! Exchange correctly routes to all other internal and external domains when sending email.

    Somehow Exchange is trying to deliver to a machine that, by all accounts, it has no business trying to use for just this one domain. Is there a DNS cache that Exchange uses internally? Is there a way to flush that internal cache? What else could I be missing?


  • Simulated NAT Traversal on Virtual Box

    - by Sumit Arora
    I have installed VirtualBox (with two virtual adapters, NAT-type): host is Ubuntu 10.10, guest is openSUSE 11.4.

    Objective: trying to simulate all four types of NAT as defined here: https://wiki.asterisk.org/wiki/display/TOP/NAT+Traversal+Testing

    Simulating the various kinds of NATs can be done using Linux iptables. In these examples, eth0 is the private network and eth1 is the public network.

    Full-cone:

        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source <public-ip>
        iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination <private-ip>

    Restricted cone:

        iptables -t nat -A POSTROUTING -o eth1 -p tcp -j SNAT --to-source <public-ip>
        iptables -t nat -A POSTROUTING -o eth1 -p udp -j SNAT --to-source <public-ip>
        iptables -t nat -A PREROUTING -i eth1 -p tcp -j DNAT --to-destination <private-ip>
        iptables -t nat -A PREROUTING -i eth1 -p udp -j DNAT --to-destination <private-ip>
        iptables -A INPUT -i eth1 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p tcp -m state --state NEW -j DROP
        iptables -A INPUT -i eth1 -p udp -m state --state NEW -j DROP

    Port-restricted cone:

        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source <public-ip>

    Symmetric:

        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables --flush
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE --random
        iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

    What I did: openSUSE guest with two virtual adapters, eth0 and eth1 - eth1 with address 10.0.3.15 (and eth1:1 as 10.0.3.16), eth0 with address 10.0.2.15. I then ran the stund client/server (http://sourceforge.net/projects/stun/):

    Server:

        eKimchi@linux-6j9k:~/sw/stun/stund> ./server -v -h 10.0.3.15 -a 10.0.3.16

    Client:

        eKimchi@linux-6j9k:~/sw/stun/stund> ./client -v 10.0.3.15 -i 10.0.2.15

    In all four cases it gives the same results:

        test I = 1
        test II = 1
        test III = 1
        test I(2) = 1
        is nat = 0
        mapped IP same = 1
        hairpin = 1
        preserver port = 1
        Primary: Open
        Return value is 0x000001

    Q-1: Please let me know if anyone has ever done this. It should behave like a NAT as per the description, but nowhere is it working as a NAT.

    Q-2: How is NAT implemented in home routers (usually port-restricted)? Are those also pre-configured iptables rules on a tuned Linux?
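    When the stund results look identical under every rule set, it's worth confirming that the test traffic is traversing the nat table at all - with both adapters on the same guest, packets between 10.0.2.15 and 10.0.3.15 may be delivered locally and never hit the NAT rules. The rule counters make this visible:

        # Zero the counters, run the stun client, then see which rules (if any) matched.
        iptables -t nat -Z
        iptables -t nat -L -n -v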


  • Resolve another domain from current AD domain

    - by faulty
    We have two AD domains set up in our office. The first is the primary domain for our office and Exchange. The second is for development use, to simulate the production environment of our clients. Both domains are hosted on Windows 2008 R2 Enterprise. We, the development team, have no access to the office domain other than for login and email purposes. DNS runs on the PDC of both domains. Neither domain uses a public domain name.

    Now, our machines are joined to the development domain, and we use Outlook to access our office's Exchange. We've added DNS entries for both domains. From time to time we have problems resolving our office domain (i.e. during Outlook login), and we then need to edit our NIC's DNS settings to list only the DNS server from our office, and then flush DNS. After that, we switch back once it's able to resolve. Is there a permanent solution for this scenario, like specifying that the office domain be resolved by another DNS server when requested from the development domain? Thanks
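    The usual fix for exactly this layout is a conditional forwarder on the development domain's DNS server, so queries for the office domain always go straight to the office DNS instead of whatever the NIC's server order happens to pick. A sketch on Windows Server 2008 R2, with office.local and 192.168.1.10 as stand-ins for the real office domain and its DNS server:

        dnscmd /ZoneAdd office.local /Forwarder 192.168.1.10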


  • Microsoft SQL Server Management Studio causing system freeze

    - by CRoshanLG
    I'm experiencing very slow response from MSSMS, and it causes other applications to slow down. In particular, Skype crashes a few seconds after opening MSSMS, showing an error called "Disk I/O Error". I regularly use a few applications simultaneously (Sublime Text, MS Word, Firefox, Outlook, Skype and one or two other apps). The system works fine when MSSMS is not in use! But as soon as MSSMS is opened, all the apps start to freeze (MSSMS also responds very slowly). This problem has been there for about a week now (I haven't installed any apps or made any changes to the system during that time).

    System specifications:

        Processor: Core i3 (3.1 GHz)
        RAM: 4 GB
        OS: Windows 7 Professional (64 bit)
        Free space in C drive: ~100 GB
        MS SQL Server 2008 R2
        Microsoft SQL Server Management Studio version - 10.50.1600.1

    I've tried to find a reason for this, but there is no helpful information on the web! There are some solutions suggested (in forums and in Skype support pages) for Skype's "Disk I/O Error", all of which I tried, but none solve the problem. Has anyone faced the same scenario? (And, hopefully, knows a solution?)

    System log: I don't have much knowledge of interpreting the system log, but I think the Critical and Warning entries are not helpful. There are, however, lots of Error entries which might be useful. In source Kernel-General there are a few similar errors saying "An I/O operation initiated by the Registry failed unrecoverably. The Registry could not flush hive (file): <some file>". In source atapi there are also a few similar errors - "The driver detected a controller error on \Device\Ide\IdePort0." (all errors have occurred in 'IdePort0'). In Application Error, there are several errors logged, and the following is the latest one. Both of the errors which occurred today are similar to this one. As it is from Ssms.exe, I guess this is relevant to the cause of the problem. But as I said above, I can't understand what it means!


  • Unexplained cache RAM drops on Linux machine

    - by FunkyChicken
    I run a CentOS 5.7 64-bit machine with 24 GB RAM, running kernel 2.6.18-274.12.1.el5. This machine runs only Nginx, php-fpm and XCache as extra applications. For about 3 weeks the memory behavior on this machine has changed, and I cannot explain why. There are no crons running which flush anything like this. There are also no large numbers of files being deleted/changed during these drops.

    The 'cached' memory gets dropped about every few hours, but it's never a set gap between flushes, which indicates to me that some bottleneck gets reached instead. It also always seems to happen when total memory usage gets to about 18 GB, but again, not always exactly 18 GB. This is a graph of my memory usage: as you can see in the graph, the 'buffers' always stay more or less the same; it is mainly the 'cache' that gets dropped.

    Running vmstat -m, I have output the memory usage just before and just after a memory drop. The output is here: http://pastebin.com/diff.php?i=hJqZqztm - 'old version' being before, 'new version' being after a drop.

    About 3 weeks ago my server crashed during a heavy DDoS attack; after I rebooted the machine, this odd behavior started. I have checked a bunch of logs and restarted the machine again, but I cannot find any indication of what changed. During these 'cache' memory drops, my inode usage drops at the same time. Does anyone have any idea what might be causing this behavior? Clearly my RAM isn't full, so I am curious why this could be happening.
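    Given that inode usage falls together with the page cache, slab reclaim (dentries and inodes) is a reasonable thing to watch at the moment of a drop. A small monitoring sketch, assuming a standard /proc on this kernel:

        # How aggressively the kernel reclaims dentry/inode caches (default 100).
        cat /proc/sys/vm/vfs_cache_pressure
        # Current dentry and inode slab usage:
        grep -E '^(dentry|ext3_inode_cache)' /proc/slabinfo
        # Or watch it live:
        slabtop -o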


  • JPA: Add and remove operations on lazily initialized collection behaviour?

    - by Albert Kam
    Hello, I'm currently trying out JPA 2, using Hibernate 3.6.x as the engine. I have an entity ReceivingGood that contains a List of ReceivingGoodDetail, with a bidirectional relation. Some related code for each entity follows.

    ReceivingGood.java:

        @OneToMany(mappedBy = "receivingGood", targetEntity = ReceivingGoodDetail.class,
                   fetch = FetchType.LAZY, cascade = CascadeType.ALL)
        private List<ReceivingGoodDetail> details = new ArrayList<ReceivingGoodDetail>();

        public void addReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            receivingGoodDetail.setReceivingGood(this);
        }

        void internalAddReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            this.details.add(receivingGoodDetail);
        }

        public void removeReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            receivingGoodDetail.setReceivingGood(null);
        }

        void internalRemoveReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            this.details.remove(receivingGoodDetail);
        }

    ReceivingGoodDetail.java:

        @ManyToOne
        @JoinColumn(name = "receivinggood_id")
        private ReceivingGood receivingGood;

        public void setReceivingGood(ReceivingGood receivingGood) {
            if (this.receivingGood != null) {
                this.receivingGood.internalRemoveReceivingGoodDetail(this);
            }
            this.receivingGood = receivingGood;
            if (receivingGood != null) {
                receivingGood.internalAddReceivingGoodDetail(this);
            }
        }

    In my experiments with both of these entities, both adding a detail to the receivingGood's collection and removing a detail from it will trigger a query that fills the collection before doing the add or remove. This assumption is based on the experiments I will paste below. My concern is: is it OK to change only a few records in the collection when the engine has to query all of the details belonging to it? What if the collection had to be filled with 1000 records when I just want to edit a single record?

    Here are my experiments, with the output as the comment above each method:

        /*
        Hibernate: select receivingg0_.id as id9_14_, receivingg0_.creationDate as creation2_9_14_, ... too long
        Hibernate: select receivingg0_.id as id10_20_, receivingg0_.creationDate as creation2_10_20_, ... too long
        removing existing detail from lazy collection
        Hibernate: select details0_.receivinggood_id as receivi13_9_8_, details0_.id as id8_, details0_.id as id10_7_, details0_.creationDate as creation2_10_7_, details0_.modificationDate as modifica3_10_7_, details0_.usercreate_id as usercreate10_10_7_, details0_.usermodify_id as usermodify11_10_7_, details0_.version as version10_7_, details0_.buyQuantity as buyQuant5_10_7_, details0_.buyUnit as buyUnit10_7_, details0_.internalQuantity as internal7_10_7_, details0_.internalUnit as internal8_10_7_, details0_.product_id as product12_10_7_, details0_.receivinggood_id as receivi13_10_7_, details0_.supplierLotNumber as supplier9_10_7_, user1_.id as id2_0_, user1_.creationDate as creation2_2_0_, user1_.modificationDate as modifica3_2_0_, user1_.usercreate_id as usercreate6_2_0_, user1_.usermodify_id as usermodify7_2_0_, user1_.version as version2_0_, user1_.name as name2_0_, user2_.id as id2_1_, user2_.creationDate as creation2_2_1_, user2_.modificationDate as modifica3_2_1_, user2_.usercreate_id as usercreate6_2_1_, user2_.usermodify_id as usermodify7_2_1_, user2_.version as version2_1_, user2_.name as name2_1_, user3_.id as id2_2_, user3_.creationDate as creation2_2_2_, user3_.modificationDate as modifica3_2_2_, user3_.usercreate_id as usercreate6_2_2_, user3_.usermodify_id as usermodify7_2_2_, user3_.version as version2_2_, user3_.name as name2_2_, user4_.id as id2_3_, user4_.creationDate as creation2_2_3_, user4_.modificationDate as modifica3_2_3_, user4_.usercreate_id as usercreate6_2_3_, user4_.usermodify_id as usermodify7_2_3_, user4_.version as version2_3_, user4_.name as name2_3_, product5_.id as id0_4_, product5_.creationDate as creation2_0_4_, product5_.modificationDate as modifica3_0_4_, product5_.usercreate_id as usercreate7_0_4_, product5_.usermodify_id as usermodify8_0_4_, product5_.version as version0_4_, product5_.code as code0_4_, product5_.name as name0_4_, user6_.id as id2_5_, user6_.creationDate as creation2_2_5_, user6_.modificationDate as modifica3_2_5_, user6_.usercreate_id as usercreate6_2_5_, user6_.usermodify_id as usermodify7_2_5_, user6_.version as version2_5_, user6_.name as name2_5_, user7_.id as id2_6_, user7_.creationDate as creation2_2_6_, user7_.modificationDate as modifica3_2_6_, user7_.usercreate_id as usercreate6_2_6_, user7_.usermodify_id as usermodify7_2_6_, user7_.version as version2_6_, user7_.name as name2_6_ from ReceivingGoodDetail details0_ left outer join COMMON_USER user1_ on details0_.usercreate_id=user1_.id left outer join COMMON_USER user2_ on user1_.usercreate_id=user2_.id left outer join COMMON_USER user3_ on user2_.usermodify_id=user3_.id left outer join COMMON_USER user4_ on details0_.usermodify_id=user4_.id left outer join Product product5_ on details0_.product_id=product5_.id left outer join COMMON_USER user6_ on product5_.usercreate_id=user6_.id left outer join COMMON_USER user7_ on product5_.usermodify_id=user7_.id where details0_.receivinggood_id=?
        after removing try selecting the size : 4
        after removing, now flushing
        Hibernate: update ReceivingGood set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, purchaseorder_id=?, supplier_id=?, transactionDate=?, transactionNumber=?, transactionType=?, transactionYearMonth=?, warehouse_id=? where id=? and version=?
        Hibernate: update ReceivingGoodDetail set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, buyQuantity=?, buyUnit=?, internalQuantity=?, internalUnit=?, product_id=?, receivinggood_id=?, supplierLotNumber=? where id=? and version=?
        detail size : 4
        */
        public void removeFromLazyCollection() {
            String headerId = "3b373f6a-9cd1-4c9c-9d46-240de37f6b0f";
            ReceivingGood receivingGood = em.find(ReceivingGood.class, headerId);

            // get existing detail
            ReceivingGoodDetail detail = em.find(ReceivingGoodDetail.class, "323fb0e7-9bb2-48dc-bc07-5ff32f30e131");
            detail.setInternalUnit("MCB");

            System.out.println("removing existing detail from lazy collection");
            receivingGood.removeReceivingGoodDetail(detail);
            System.out.println("after removing try selecting the size : " + receivingGood.getDetails().size());

            System.out.println("after removing, now flushing");
            em.flush();
            System.out.println("detail size : " + receivingGood.getDetails().size());
        }

        /*
        Hibernate: select receivingg0_.id as id9_14_, receivingg0_.creationDate as creation2_9_14_, ... too long
        Hibernate: select receivingg0_.id as id10_20_, receivingg0_.creationDate as creation2_10_20_, ... too long
        adding existing detail into lazy collection
        Hibernate: select details0_.receivinggood_id as receivi13_9_8_, details0_.id as id8_, details0_.id as id10_7_, details0_.creationDate as creation2_10_7_, details0_.modificationDate as modifica3_10_7_, details0_.usercreate_id as usercreate10_10_7_, details0_.usermodify_id as usermodify11_10_7_, details0_.version as version10_7_, details0_.buyQuantity as buyQuant5_10_7_, details0_.buyUnit as buyUnit10_7_, details0_.internalQuantity as internal7_10_7_, details0_.internalUnit as internal8_10_7_, details0_.product_id as product12_10_7_, details0_.receivinggood_id as receivi13_10_7_, details0_.supplierLotNumber as supplier9_10_7_, user1_.id as id2_0_, user1_.creationDate as creation2_2_0_, user1_.modificationDate as modifica3_2_0_, user1_.usercreate_id as usercreate6_2_0_, user1_.usermodify_id as usermodify7_2_0_, user1_.version as version2_0_, user1_.name as name2_0_, user2_.id as id2_1_, user2_.creationDate as creation2_2_1_, user2_.modificationDate as modifica3_2_1_, user2_.usercreate_id as usercreate6_2_1_, user2_.usermodify_id as usermodify7_2_1_, user2_.version as version2_1_, user2_.name as name2_1_, user3_.id as id2_2_, user3_.creationDate as creation2_2_2_, user3_.modificationDate as modifica3_2_2_, user3_.usercreate_id as usercreate6_2_2_, user3_.usermodify_id as usermodify7_2_2_, user3_.version as version2_2_, user3_.name as name2_2_, user4_.id as id2_3_, user4_.creationDate as creation2_2_3_, user4_.modificationDate as modifica3_2_3_, user4_.usercreate_id as usercreate6_2_3_, user4_.usermodify_id as usermodify7_2_3_, user4_.version as version2_3_, user4_.name as name2_3_, product5_.id as id0_4_, product5_.creationDate as creation2_0_4_, product5_.modificationDate as modifica3_0_4_, product5_.usercreate_id as usercreate7_0_4_, product5_.usermodify_id as usermodify8_0_4_, product5_.version as version0_4_, product5_.code as code0_4_, product5_.name as name0_4_, user6_.id as id2_5_, user6_.creationDate as creation2_2_5_, user6_.modificationDate as modifica3_2_5_, user6_.usercreate_id as usercreate6_2_5_, user6_.usermodify_id as usermodify7_2_5_, user6_.version as version2_5_, user6_.name as name2_5_, user7_.id as id2_6_, user7_.creationDate as creation2_2_6_, user7_.modificationDate as modifica3_2_6_, user7_.usercreate_id as usercreate6_2_6_, user7_.usermodify_id as usermodify7_2_6_, user7_.version as version2_6_, user7_.name as name2_6_ from ReceivingGoodDetail details0_ left outer join COMMON_USER user1_ on details0_.usercreate_id=user1_.id left outer join COMMON_USER user2_ on user1_.usercreate_id=user2_.id left outer join COMMON_USER user3_ on user2_.usermodify_id=user3_.id left outer join COMMON_USER user4_ on details0_.usermodify_id=user4_.id left outer join Product product5_ on details0_.product_id=product5_.id left outer join COMMON_USER user6_ on product5_.usercreate_id=user6_.id left outer join COMMON_USER user7_ on product5_.usermodify_id=user7_.id where details0_.receivinggood_id=?
        after adding try selecting the size : 5
        after adding, now flushing
        Hibernate: update ReceivingGood set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, purchaseorder_id=?, supplier_id=?, transactionDate=?, transactionNumber=?, transactionType=?, transactionYearMonth=?, warehouse_id=? where id=? and version=?
        detail size : 5
        */
        public void editLazyCollection() {
            String headerId = "3b373f6a-9cd1-4c9c-9d46-240de37f6b0f";
            ReceivingGood receivingGood = em.find(ReceivingGood.class, headerId);

            // get existing detail
            ReceivingGoodDetail detail = em.find(ReceivingGoodDetail.class, "323fb0e7-9bb2-48dc-bc07-5ff32f30e131");
            detail.setInternalUnit("MCB");

            System.out.println("adding existing detail into lazy collection");
            receivingGood.addReceivingGoodDetail(detail);
            System.out.println("after adding try selecting the size : " + receivingGood.getDetails().size());

            System.out.println("after adding, now flushing");
            em.flush();
            System.out.println("detail size : " + receivingGood.getDetails().size());
        }

    Please share your experience on this matter! Thank you!


  • why does mysql have so many more open and fragmented tables than tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother, and I've had good results over the past week. But there are still some things I don't understand. For one thing, the database has 25 tables, but mysql status shows 512 open:

        mysqladmin status
        Uptime: 212854 Threads: 1 Questions: 43041 Slow queries: 7 Opens: 2605 Flush tables: 1 Open tables: 512 Queries per second avg: 0.202

    I've read that ISAM opens extra file descriptors, and a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be, or what I should be looking into?

    I've also been using mysqltuner, and it has been helpful. But it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times. It hasn't reduced the number of fragmented tables that mysqltuner reports.

    I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction, narrow down Google searches, or just generally help me be less clueless? Thanks!
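    Two server-side checks make the 512 easier to interpret: Open_tables is capped by the table cache setting (table_cache on 5.0/5.1, table_open_cache on later versions), and with MyISAM each table can hold several descriptors open at once, so the count can legitimately exceed the number of tables. A sketch of the queries, hedged on the variable name for your version:

        mysql -e "SHOW GLOBAL VARIABLES LIKE 'table%cache%'"
        mysql -e "SHOW GLOBAL STATUS LIKE 'Open%tables'"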


  • Blocking an IP from connecting

    - by Sam W.
    I have a problem with my Apache webserver: there's an IP that keeps connecting to my server, using a lot of connections that won't die, which eventually makes my webserver time out. The connections stay in the SYN_SENT state if I check using netstat -netapu. I even flushed my iptables and used the basic rules, and it still doesn't work; the IP connects as soon as I start Apache. The basic rules that I use:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
        iptables -A INPUT -s 89.149.244.117 -j REJECT
        iptables -A OUTPUT -s 89.149.244.117 -j REJECT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        iptables -A INPUT -p tcp --dport 21 -j ACCEPT
        iptables -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        iptables -A INPUT -j REJECT
        iptables -A FORWARD -j REJECT

    The two 89.149.244.117 rules are the ones in question. Not sure if this is related, but the tcp_syncookies value is 1. Can someone point out my mistake? Is there a way to block it for good? Thank you
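    If the address still shows up after the rules are loaded, two common extra steps are putting the block at the very top of INPUT and clearing any connection-tracking state that predates the rules. A sketch, assuming the conntrack userspace tool is installed:

        # Insert the drop at position 1, ahead of everything else.
        iptables -I INPUT 1 -s 89.149.244.117 -j DROP
        # Flush any existing tracked connections from that source.
        conntrack -D -s 89.149.244.117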

    Read the article

  • Can anyone explain why my crypto++ decrypted file 16 bytes short?

    - by Tom Williams
    I suspect it might be too much to hope for, but can anyone with experience with crypto++ explain why the "decrypted.out" file created by main() is 16 characters short (which probably not coincidentally is the block size)? I think the issue must be in CryptStreamBuffer::GetNextChar(), but I've been staring at it and the crypto++ documentation for hours. Any other comments about how crummy or naive my std::streambuf implementation is would also be welcome ;-) And I've just noticed I'm missing some calls to delete, so you don't have to tell me about those. Thanks, Tom

    // Runtime Includes
    #include <iostream>
    #include <fstream>   // ifstream/ofstream (needed explicitly on some toolchains)

    // Crypto++ Includes
    #include "aes.h"
    #include "modes.h"   // xxx_Mode< >
    #include "filters.h" // StringSource and StreamTransformation
    #include "files.h"

    using namespace std;

    class CryptStreamBuffer: public std::streambuf {
    public:
        CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c);
        CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c);
    protected:
        virtual int_type overflow(int_type ch = traits_type::eof());
        virtual int_type uflow();
        virtual int_type underflow();
        virtual int_type pbackfail(int_type ch);
        virtual int sync();
    private:
        int GetNextChar();
        int m_NextChar; // Buffered character
        CryptoPP::StreamTransformationFilter* m_StreamTransformationFilter;
        CryptoPP::FileSource* m_Source;
        CryptoPP::FileSink* m_Sink;
    }; // class CryptStreamBuffer

    CryptStreamBuffer::CryptStreamBuffer(istream& encryptedInput, CryptoPP::StreamTransformation& c) :
        m_NextChar(traits_type::eof()),
        m_StreamTransformationFilter(0), m_Source(0), m_Sink(0) {
        m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c);
        m_Source = new CryptoPP::FileSource(encryptedInput, false, m_StreamTransformationFilter);
    }

    CryptStreamBuffer::CryptStreamBuffer(ostream& encryptedOutput, CryptoPP::StreamTransformation& c) :
        m_NextChar(traits_type::eof()),
        m_StreamTransformationFilter(0), m_Source(0), m_Sink(0) {
        m_Sink = new CryptoPP::FileSink(encryptedOutput);
        m_StreamTransformationFilter = new CryptoPP::StreamTransformationFilter(c, m_Sink);
    }

    CryptStreamBuffer::int_type CryptStreamBuffer::overflow(int_type ch) {
        return m_StreamTransformationFilter->Put((byte)ch);
    }

    CryptStreamBuffer::int_type CryptStreamBuffer::uflow() {
        int_type result = GetNextChar();
        // Reset the buffered character
        m_NextChar = traits_type::eof();
        return result;
    }

    CryptStreamBuffer::int_type CryptStreamBuffer::underflow() {
        return GetNextChar();
    }

    CryptStreamBuffer::int_type CryptStreamBuffer::pbackfail(int_type ch) {
        return traits_type::eof();
    }

    int CryptStreamBuffer::sync() {
        if (m_Sink) {
            m_StreamTransformationFilter->MessageEnd();
        }
    }

    int CryptStreamBuffer::GetNextChar() {
        // If we have a buffered character do nothing
        if (m_NextChar != traits_type::eof()) {
            return m_NextChar;
        }

        // If there are no more bytes currently available then pump the source
        // *** I SUSPECT THE PROBLEM IS HERE ***
        if (m_StreamTransformationFilter->MaxRetrievable() == 0) {
            m_Source->Pump(1024);
        }

        // Retrieve the next byte
        byte nextByte;
        size_t noBytes = m_StreamTransformationFilter->Get(nextByte);
        if (0 == noBytes) {
            return traits_type::eof();
        }

        // Buffer up the next character
        m_NextChar = nextByte;
        return m_NextChar;
    }

    void InitKey(byte key[]) {
        key[0] = -62;  key[1] = 102;  key[2] = 78;   key[3] = 75;
        key[4] = -96;  key[5] = 125;  key[6] = 66;   key[7] = 125;
        key[8] = -95;  key[9] = -66;  key[10] = 114; key[11] = 22;
        key[12] = 48;  key[13] = 111; key[14] = -51; key[15] = 112;
    }

    void DecryptFile(const char* sourceFileName, const char* destFileName) {
        ifstream ifs(sourceFileName, ios::in | ios::binary);
        ofstream ofs(destFileName, ios::out | ios::binary);

        byte key[CryptoPP::AES::DEFAULT_KEYLENGTH];
        InitKey(key);
        CryptoPP::ECB_Mode<CryptoPP::AES>::Decryption decryptor(key, sizeof(key));

        if (ifs) {
            if (ofs) {
                CryptStreamBuffer cryptBuf(ifs, decryptor);
                std::istream decrypt(&cryptBuf);
                int c;
                while (EOF != (c = decrypt.get())) {
                    ofs << (char)c;
                }
                ofs.flush();
            } else {
                std::cerr << "Failed to open file '" << destFileName << "'." << endl;
            }
        } else {
            std::cerr << "Failed to open file '" << sourceFileName << "'." << endl;
        }
    }

    void EncryptFile(const char* sourceFileName, const char* destFileName) {
        ifstream ifs(sourceFileName, ios::in | ios::binary);
        ofstream ofs(destFileName, ios::out | ios::binary);

        byte key[CryptoPP::AES::DEFAULT_KEYLENGTH];
        InitKey(key);
        CryptoPP::ECB_Mode<CryptoPP::AES>::Encryption encryptor(key, sizeof(key));

        if (ifs) {
            if (ofs) {
                CryptStreamBuffer cryptBuf(ofs, encryptor);
                std::ostream encrypt(&cryptBuf);
                int c;
                while (EOF != (c = ifs.get())) {
                    encrypt << (char)c;
                }
                encrypt.flush();
            } else {
                std::cerr << "Failed to open file '" << destFileName << "'." << endl;
            }
        } else {
            std::cerr << "Failed to open file '" << sourceFileName << "'." << endl;
        }
    }

    int main(int argc, char* argv[]) {
        EncryptFile(argv[1], "encrypted.out");
        DecryptFile("encrypted.out", "decrypted.out");
        return 0;
    }
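    For context on the symptom: with a block cipher, Crypto++'s StreamTransformationFilter holds the final 16-byte block back until it is told the message has ended, and Pump() alone never delivers that signal on the decrypting side. A minimal sketch of the kind of change that addresses this, assuming the GetNextChar() above (illustrative and untested; m_MessageEnded is a hypothetical new bool member, initialised to false):

        // Inside CryptStreamBuffer::GetNextChar(), replacing the pump step:
        if (m_StreamTransformationFilter->MaxRetrievable() == 0) {
            if (!m_Source->SourceExhausted()) {
                m_Source->Pump(1024);
            } else if (!m_MessageEnded) {
                // No more ciphertext is coming: signalling end-of-message lets the
                // filter strip the padding and release the held-back final block.
                m_StreamTransformationFilter->MessageEnd();
                m_MessageEnded = true;
            }
        }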

    Read the article

  • Postfix enable SSL 465 failed

    - by user221290
    I have installed Postfix and enabled SSL/TLS. I just tested it: I can send email through ports 25 and 587, but cannot send email through port 465. The log is:

    May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server hello A
    May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write certificate A
    May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server done A
    May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 flush data
    May 26 17:24:06 mail postfix/smtpd[28721]: SSL3 alert read:fatal:certificate unknown
    May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:failed in SSLv3 read client certificate A
    May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept error from unknown[10.155.36.240]: 0
    May 26 17:24:06 mail postfix/smtpd[28721]: warning: TLS library problem: 28721:error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown:s3_pkt.c:1197:SSL alert number 46:
    May 26 17:24:06 mail postfix/smtpd[28721]: lost connection after CONNECT from unknown[10.155.36.240]
    May 26 17:24:06 mail postfix/smtpd[28721]: disconnect from unknown[10.155.36.240]

    My email server is 10.155.34.117 and the email client is 10.155.36.240. The client-side error is: Could not connect to SMTP host: 10.155.34.117, port: 465.

    My master.cf:

    smtps inet n - n - - smtpd -o smtpd_tls_wrappermode=yes

    My main.cf:

    smtpd_use_tls = yes
    smtpd_tls_auth_only = no
    smtpd_tls_key_file = /etc/pki/myca/mail.key
    smtpd_tls_cert_file = /etc/pki/myca/mail.crt
    smtpd_tls_CAfile = /etc/pki/myca/cacert_new.pem
    smtpd_tls_loglevel = 2
    smtpd_tls_received_header = yes
    smtpd_tls_session_cache_timeout = 3600s
    smtpd_tls_session_cache_database = btree:/etc/postfix/smtpd_scache

    It seems to be a certificate issue, but I have tried granting access to the file many times... I have no idea what's wrong here, please help!
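    One reading of the log: the "certificate unknown" alert (SSL alert number 46) is sent by the client, meaning the handshake itself works but the client refuses the certificate Postfix presents, typically because the issuing CA is not in the client's trust store. A quick way to inspect exactly what the server hands out on each port, a minimal sketch assuming the openssl CLI is available on the client machine:

        # Wrapper-mode TLS on 465: the handshake starts immediately on connect
        openssl s_client -connect 10.155.34.117:465 -showcerts

        # STARTTLS on 25/587 for comparison
        openssl s_client -connect 10.155.34.117:25 -starttls smtp

    If both ports present the same chain, the next place to look is the client's trust store; importing cacert_new.pem there is the usual remedy.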

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >