Search Results

Search found 5946 results on 238 pages for 'heavy bytes'.

  • Django does not load internal .css files

    - by Rubén Jiménez
    I have created a Django project locally which runs without any kind of problem. But after an annoying and difficult Cherokee + uWSGI installation on Amazon AWS, my project does not load Django's internal .css files. http://f.cl.ly/items/2Q2W3I3R0X1n2X3v0q2P/django_error.jpg <-- what /admin/ looks like. The image is a screenshot of my /admin/, which should have a different style, but the .css files are not loaded. [pid: 23206|app: 0|req: 19/19] 83.49.10.217 () {56 vars in 1121 bytes} [Sun Apr 15 05:50:24 2012] GET /static/admin/css/base.css => generated 2896 bytes in 6 msecs (HTTP/1.1 404) 1 headers in 51 bytes (1 switches on core 0) [pid: 23206|app: 0|req: 20/20] 83.49.10.217 () {56 vars in 1125 bytes} [Sun Apr 15 05:50:24 2012] GET /static/admin/css/login.css => generated 2899 bytes in 5 msecs (HTTP/1.1 404) 1 headers in 51 bytes (1 switches on core 0) This is a log from Cherokee. I don't understand why it is looking for the .css files in that path: Cherokee should be searching for the files in Django's original directory, since I didn't change the .css files in my project. Any advice? Thanks a lot.

  • Asp.net membership salt?

    - by chobo2
    Hi. Does anyone know how ASP.NET membership generates its salt key, and how it encodes it (i.e. is it salt + password or password + salt)? I am using SHA1 with my membership, but I would like to recreate the same salts so the built-in membership code can hash things the same way mine does. Thanks. Edit 2: Never mind, I misread it and was thinking it said bytes, not bits. So I was passing in 128 bytes, not 128 bits. Edit: I've been trying to make it work, and this is what I have: public string EncodePassword(string password, string salt) { byte[] bytes = Encoding.Unicode.GetBytes(password); byte[] src = Encoding.Unicode.GetBytes(salt); byte[] dst = new byte[src.Length + bytes.Length]; Buffer.BlockCopy(src, 0, dst, 0, src.Length); Buffer.BlockCopy(bytes, 0, dst, src.Length, bytes.Length); HashAlgorithm algorithm = HashAlgorithm.Create("SHA1"); byte[] inArray = algorithm.ComputeHash(dst); return Convert.ToBase64String(inArray); } private byte[] createSalt(byte[] saltSize) { byte[] saltBytes = saltSize; RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider(); rng.GetNonZeroBytes(saltBytes); return saltBytes; } I have not yet tried to see if ASP.NET membership will recognize this, but the hashed password looks close. I just don't know how to convert the salt to base64. I did this: byte[] storeSalt = createSalt(new byte[128]); string salt = Encoding.Unicode.GetString(storeSalt); string base64Salt = Convert.ToBase64String(storeSalt); int test = base64Salt.Length; Test length is 172, which is well over 128 bits, so what am I doing wrong? This is what their salt looks like: vkNj4EvbEPbk1HHW+K8y/A== This is what my salt looks like: E9oEtqo0livLke9+csUkf2AOLzFsOvhkB/NocSQm33aySyNOphplx9yH2bgsHoEeR/aw/pMe4SkeDvNVfnemoB4PDNRUB9drFhzXOW5jypF9NQmBZaJDvJ+uK3mPXsWkEcxANn9mdRzYCEYCaVhgAZ5oQRnnT721mbFKpfc4kpI=
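    The numbers in the edits line up once the salt size is read in bits: base64 of n bytes is 4*ceil(n/3) characters, so a 128-byte salt encodes to 172 characters (the length seen above), while a 128-bit (16-byte) salt encodes to 24 characters, exactly like vkNj4EvbEPbk1HHW+K8y/A==. The usual reading of the default provider's EncodePassword, which the snippet above mirrors, is salt bytes first, then the UTF-16 password bytes, then SHA-1. A minimal sketch of that scheme under those assumptions, written here in Java for illustration:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class MembershipSaltSketch {
        public static void main(String[] args) throws Exception {
            // 128 bits = 16 bytes; new byte[128] would make a 1024-bit salt instead
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            String salt64 = Base64.getEncoder().encodeToString(salt);
            System.out.println(salt64.length());   // 24 characters, like vkNj4EvbEPbk1HHW+K8y/A==

            // assumed scheme: SHA1(salt bytes + UTF-16LE password bytes), base64-encoded
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            sha1.update(salt);
            sha1.update("p@ssw0rd".getBytes(StandardCharsets.UTF_16LE));
            System.out.println(Base64.getEncoder().encodeToString(sha1.digest()));
        }
    }
    ```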

  • General String Encryption in .NET

    - by cryptospin
    I am looking for a general string encryption class in .NET. (Not to be confused with the 'SecureString' class.) I have started to come up with my own class, but thought there must be a .NET class that already allows you to encrypt/decrypt strings of any encoding with any Cryptographic Service Provider. Public Class SecureString Private key() As Byte Private iv() As Byte Private m_SecureString As String Public ReadOnly Property Encrypted() As String Get Return m_SecureString End Get End Property Public ReadOnly Property Decrypted() As String Get Return Decrypt(m_SecureString) End Get End Property Public Sub New(ByVal StringToSecure As String) If StringToSecure Is Nothing Then StringToSecure = "" m_SecureString = Encrypt(StringToSecure) End Sub Private Function Encrypt(ByVal StringToEncrypt As String) As String Dim result As String = "" Dim bytes() As Byte = Text.Encoding.UTF8.GetBytes(StringToEncrypt) Using provider As New AesCryptoServiceProvider() With provider .Mode = CipherMode.CBC .GenerateKey() .GenerateIV() key = .Key iv = .IV End With Using ms As New IO.MemoryStream Using cs As New CryptoStream(ms, provider.CreateEncryptor(), CryptoStreamMode.Write) cs.Write(bytes, 0, bytes.Length) cs.FlushFinalBlock() End Using result = Convert.ToBase64String(ms.ToArray()) End Using End Using Return result End Function Private Function Decrypt(ByVal StringToDecrypt As String) As String Dim result As String = "" Dim bytes() As Byte = Convert.FromBase64String(StringToDecrypt) Using provider As New AesCryptoServiceProvider() Using ms As New IO.MemoryStream Using cs As New CryptoStream(ms, provider.CreateDecryptor(key, iv), CryptoStreamMode.Write) cs.Write(bytes, 0, bytes.Length) cs.FlushFinalBlock() End Using result = Text.Encoding.UTF8.GetString(ms.ToArray()) End Using End Using Return result End Function End Class
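    One thing worth flagging in the class above: the key and IV are generated inside Encrypt() and held only in instance fields, so a ciphertext can only ever be decrypted by the same instance that produced it; persisting the key and IV alongside the data is what makes the round trip durable. A rough sketch of the same AES/CBC round trip, written in Java here as a neutral illustration (key management and error handling deliberately omitted):

    ```java
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class SecureStringSketch {
        public static void main(String[] args) throws Exception {
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            byte[] iv = new byte[16];                 // AES block size
            new SecureRandom().nextBytes(iv);

            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
            byte[] ct = cipher.doFinal("my secret".getBytes(StandardCharsets.UTF_8));
            String encrypted = Base64.getEncoder().encodeToString(ct);

            // decryption must see the SAME key and IV, so store them with the ciphertext
            cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
            String decrypted = new String(cipher.doFinal(Base64.getDecoder().decode(encrypted)),
                                          StandardCharsets.UTF_8);
            System.out.println(decrypted);            // my secret
        }
    }
    ```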

  • Java threads, wait time always 00:00:00 - Producer/Consumer

    - by user3742254
    I am currently doing a producer consumer problem with a number of threads and have had to set priorities and waits to them to ensure that one thread, the security thread, runs last. I have managed to do this and I have managed to get the buffer working. The last thing that I am required to do is to show the wait time of threads that are too large for the buffer and to calculate the average wait time. I have included code to do so, but everything I run the program, the wait time is always returned as 00:00:00, and by extension, the average is returned as the same. I was speaking to one of my colleagues who said that it is not a matter of the code but rather a matter of the computer needing to work off of one processor, which can be adjusted in the task manager settings. He has an HP like myself but his program prints the wait time 180 times, whereas mine prints usually about 3-7 times and is only 00:00:01 on one instance before finishing when I have made the processor adjustments. My other colleague has an iMac and hers puts out an average of 42:00:34(42 minutes??) I am very confused about this because I can see no difference between our codes and like my colleague said, I was wondering is it a computer issue. I am obviously concerned as I wanted to make sure that my code correctly calculated an average wait time, but that is impossible to tell when the wait times always show as 00:00:00. To calculate the thread duration, including the time it entered and exited the buffer was done by using a timestamp import, and then subtracting start time from end time. Is my code correct for this issue or is there something which is missing? I would be very grateful for any solutions. Below is my code: My buffer class package com.Com813cw; import java.text.DateFormat; import java.text.SimpleDateFormat; /** * Created by Rory on 10/08/2014. */ class Buffer { private int contents, count = 0, process = 200; private int totalRam = 1000; private boolean available = false; private long start, end, wait, request = 0; private DateFormat time = new SimpleDateFormat("ss:SSS"); public int avWaitTime =0; public void average(){ System.out.println("Average Application Request wait time: "+ time.format(request/count)); } public synchronized int get() { while (process <= 500) { try { wait(); } catch (InterruptedException e) { } } process -= 200; System.out.println("CPU After Process " + process); notifyAll(); return contents; } public synchronized void put(int value) { if (process <= 500) { process += value; } else { start = System.currentTimeMillis(); try { wait(); } catch (InterruptedException e) { } end = System.currentTimeMillis(); wait = end - start; count++; request += wait; System.out.println("Application Request Wait Time: " + time.format(wait)); process += value; contents = value; calcWait(wait, count); } notifyAll(); } public void calcWait(long wait, int count){ this.avWaitTime = (int) (wait/count); } public void printWait(){ System.out.println("Wait time is " + time.format(this.avWaitTime)); } } My spotify class package com.Com813cw; import java.sql.Timestamp; /** * Created by Rory on 11/08/2014. 
*/ class Spotify extends Thread { private Buffer buffer; private int number; private int bytes = 250; public Spotify(Buffer c, int number) { buffer = c; this.number = number; } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 20; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes "); try { sleep(1000); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("Spotify has finished executing."); System.out.println("Time taken to execute was " + timeTaken + " milliseconds"); System.out.println("Time that Spotify thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("-----------------------------"); } } My BubbleWitch class package com.Com813cw; import java.lang.*; import java.lang.System; import java.sql.Timestamp; /** * Created by Rory on 10/08/2014. */ class BubbleWitch2 extends Thread { private Buffer buffer; private int number; private int bytes = 100; public BubbleWitch2(Buffer c, int number) { buffer = c; this.number=number ; } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 10; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes "); try { sleep(1000); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("BubbleWitch2 has finished executing."); System.out.println("Time taken to execute was " +timeTaken+ " milliseconds"); System.out.println("Time Bubblewitch2 thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("-----------------------------"); } } My Test class package com.Com813cw; /** * Created by Rory on 10/08/2014. */ public class ProducerConsumerTest { public static void main(String[] args) throws InterruptedException { Buffer c = new Buffer(); BubbleWitch2 p1 = new BubbleWitch2(c,1); Processor c1 = new Processor(c, 1); Spotify p2 = new Spotify(c, 2); SystemManagement p3 = new SystemManagement(c, 3); SecurityUpdate p4 = new SecurityUpdate(c, 4, p1, p2, p3); p1.setName("BubbleWitch2 "); p2.setName("Spotify "); p3.setName("System Management "); p4.setName("Security Update "); p1.setPriority(10); p2.setPriority(10); p3.setPriority(10); p4.setPriority(5); c1.start(); p1.start(); p2.start(); p3.start(); p4.start(); p2.join(); p3.join(); p4.join(); c.average(); System.exit(0); } } My security update package com.Com813cw; import java.lang.*; import java.lang.System; import java.sql.Timestamp; /** * Created by Rory on 11/08/2014. 
*/ class SecurityUpdate extends Thread { private Buffer buffer; private int number; private int bytes = 150; private int process = 0; public SecurityUpdate(Buffer c, int number, BubbleWitch2 bubbleWitch2, Spotify spotify, SystemManagement systemManagement) throws InterruptedException { buffer = c; this.number = number; bubbleWitch2.join(); spotify.join(); systemManagement.join(); } long startTime = System.currentTimeMillis(); public void run() { for (int i = 0; i < 15; i++) { buffer.put(bytes); System.out.println(getName() + this.number + " put: " + bytes + " bytes"); try { sleep(1500); } catch (InterruptedException e) { } } long endTime = System.currentTimeMillis(); long timeTaken = endTime - startTime; java.util.Date date = new java.util.Date(); System.out.println("-----------------------------"); System.out.println("Security Update has finished executing."); System.out.println("Time taken to execute was " + timeTaken + " milliseconds"); System.out.println("Time that SecurityUpdate thread exited Buffer was " + new Timestamp(date.getTime())); System.out.println("------------------------------"); } } I'd be grateful as I said for any help as this is the last and most frustrating obstacle.
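    Two details in the code above plausibly explain the 00:00:00 readings: put() only starts its timer on the branch where process > 500 forces it into wait(), which these sleep intervals rarely trigger, and SimpleDateFormat("ss:SSS") treats the elapsed long as a point in time rather than a duration. A stripped-down sketch (a hypothetical class, not the assignment's API) that times the blocking with nanoTime and prints plain milliseconds:

    ```java
    import java.util.ArrayDeque;

    class TimedBuffer {
        private final ArrayDeque<Integer> items = new ArrayDeque<>();
        private final int capacity = 4;
        private long totalWaitMs = 0;
        private int waiters = 0;

        public synchronized void put(int value) throws InterruptedException {
            long start = System.nanoTime();   // start timing before any potential block
            while (items.size() >= capacity) {
                wait();                       // released by notifyAll() in get()
            }
            totalWaitMs += (System.nanoTime() - start) / 1_000_000;
            waiters++;
            items.addLast(value);
            notifyAll();
        }

        public synchronized int get() throws InterruptedException {
            while (items.isEmpty()) {
                wait();
            }
            int value = items.removeFirst();
            notifyAll();
            return value;
        }

        public synchronized void printAverage() {
            // plain arithmetic, not SimpleDateFormat, which would render 42 ms as a clock time
            long avg = waiters == 0 ? 0 : totalWaitMs / waiters;
            System.out.println("average wait: " + avg + " ms over " + waiters + " puts");
        }
    }
    ```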

  • get and set for class in model - MVC 2 asp.net

    - by bergin
    Hi there, I want to improve the program so it has a proper constructor but also works with the models environment of MVC. I currently have: public void recordDocument(int order_id, string filename, string physical_path, string slug, int bytes) { ArchiveDocument doc = new ArchiveDocument(); doc.order_id = order_id; doc.filename = filename; doc.physical_path = physical_path; doc.slug = slug; doc.bytes = bytes; db.ArchiveDocuments.InsertOnSubmit(doc); } This obviously should be a constructor and should change to the leaner: public void recordDocument(ArchiveDocument doc) { db.ArchiveDocuments.InsertOnSubmit(doc); } with a get & set somewhere else - not sure of the syntax - do I create a partial class? so: creating in the somewhere repository - ArchiveDocument doc = new ArchiveDocument(order_id, idTaggedFilename, physical_path, slug, bytes); and then: namespace ordering.Models { public partial class ArchiveDocument { int order_id, string filename, string physical_path, string slug, int bytes; public archiveDocument(int order_id, string filename, string physical_path, string slug, int bytes){ this.order_id = order_id; etc } } How should I alter the code?
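    The target shape is simply a constructor that assigns the fields, plus a repository method that accepts the finished object; no extra getters or setters are needed for the insert itself. Here it is sketched in Java with hypothetical names, since the pattern is language-neutral:

    ```java
    // Entity with a field-assigning constructor
    class ArchiveDocument {
        final int orderId;
        final String filename, physicalPath, slug;
        final int bytes;

        ArchiveDocument(int orderId, String filename, String physicalPath, String slug, int bytes) {
            this.orderId = orderId;
            this.filename = filename;
            this.physicalPath = physicalPath;
            this.slug = slug;
            this.bytes = bytes;
        }
    }

    // The repository shrinks to a one-liner that persists whatever it is handed
    class ArchiveRepository {
        private final java.util.List<ArchiveDocument> db = new java.util.ArrayList<>();

        void recordDocument(ArchiveDocument doc) {
            db.add(doc);   // stands in for db.ArchiveDocuments.InsertOnSubmit(doc)
        }
    }
    ```

    In C#, putting that constructor in a partial class beside the LINQ-to-SQL generated half, as the question already sketches, is the usual approach, since the designer file can then be regenerated without losing the addition; keep a parameterless constructor available for LINQ-to-SQL's own materialization, and note that the constructor in the question's sketch must match the class name's capitalization (ArchiveDocument, not archiveDocument).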

  • How can I tell if a byte array has already been compressed?

    - by MikeG
    Hi. Can I rely on the first few bytes of data compressed using the System.IO.Compression.DeflateStream in .NET always being the same? These always seem to be the first bytes: 237, 189, 7, 96, 28, 73, 150, 37, 38, 47, ... I'm assuming this is some kind of header, and I'd like to assume that this header is fixed and isn't going to change. Has anyone got any extra info about this? Background info (the reason I want to know this): I have a load of data in a database table that could do with being made smaller. I've decided to start compressing new data, without bothering to compress the existing data. When the data gets into my .NET code, it is a String. I'd like to be able to look at the first few bytes of the string and see whether it has been compressed; if it has, then I need to decompress it. I was originally thinking I could convert the string to bytes and just try decompressing the data, and if an exception happens, assume it wasn't compressed. But I think checking the header bytes would give me much better performance. Many thanks, Mike G
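    One caution before relying on those bytes: DeflateStream emits a raw DEFLATE stream (RFC 1951) with no fixed magic number, so the 237, 189, 7, ... prefix comes from the particular data being compressed and isn't a guaranteed header. The try-to-decompress idea is therefore the safer test, and prefixing newly compressed rows with a marker of your own is safer still. A sketch of the probe in Java, whose Inflater(true) mode matches raw deflate:

    ```java
    import java.util.Arrays;
    import java.util.zip.DataFormatException;
    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    public class DeflateProbe {
        /** Attempt a raw-deflate inflate; an immediate DataFormatException means "not compressed". */
        static boolean inflates(byte[] data) {
            Inflater inflater = new Inflater(true);   // true = raw deflate, like DeflateStream
            inflater.setInput(data);
            try {
                inflater.inflate(new byte[512]);
                return true;                          // caveat: short arbitrary inputs can pass by luck
            } catch (DataFormatException e) {
                return false;
            } finally {
                inflater.end();
            }
        }

        public static void main(String[] args) {
            Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
            deflater.setInput("hello hello hello hello".getBytes());
            deflater.finish();
            byte[] buf = new byte[512];
            byte[] compressed = Arrays.copyOf(buf, deflater.deflate(buf));
            System.out.println(inflates(compressed));              // true
            System.out.println(inflates("plain text".getBytes())); // almost always false
        }
    }
    ```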

  • Using LINQ, need help splitting a byte array on data received from Silverlight sockets

    - by gcadmes
    The message packets received contain multiple messages, delineated by a header = 0xFD and a footer = 0xFE. // sample message packet with three // different size messages List<byte> receiveBuffer = new List<byte>(); receiveBuffer.AddRange(new byte[] { 0xFD, 1, 2, 0xFE, 0xFD, 1, 2, 3, 4, 5, 6, 7, 8, 0xFE, 0xFD, 33, 65, 25, 44, 0xFE}); // note: this sample code is without synchronization, // statements, error handling...etc. while (receiveBuffer.Count > 0) { var bytesInRange = receiveBuffer.TakeWhile(n => n != 0xFE); foreach (var n in bytesInRange) Console.WriteLine(n); // process message.. // 1) remove bytes read from receive buffer // 2) construct message object... // 3) etc... receiveBuffer.RemoveRange(0, bytesInRange.Count()); } As you can see (including header/footer), the first message in this packet contains 4 bytes, the 2nd message contains 10 bytes, and the 3rd message contains 6 bytes. In the while loop, I was expecting the TakeWhile to collect the bytes that did not equal the footer part of the message. Note: since I am removing the bytes after reading them, the header can always be expected to be at position '0'. I searched examples for splitting byte arrays, but none demonstrated splitting on arrays of unknown and fluctuating sizes. Any help will be greatly appreciated. Thanks much!
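    The subtle bug in the loop above is that TakeWhile stops before the 0xFE footer, but RemoveRange then removes only bytesInRange.Count() bytes, so the footer stays at position 0 and the next iteration takes zero bytes forever; removing Count() + 1 bytes fixes it. The same scan written out longhand in Java, which makes the off-by-one easier to see:

    ```java
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.LinkedList;
    import java.util.List;

    public class MessageSplitter {
        public static void main(String[] args) {
            LinkedList<Integer> buffer = new LinkedList<>(Arrays.asList(
                    0xFD, 1, 2, 0xFE,
                    0xFD, 1, 2, 3, 4, 5, 6, 7, 8, 0xFE,
                    0xFD, 33, 65, 25, 44, 0xFE));

            List<List<Integer>> messages = new ArrayList<>();
            // real code must keep a partial trailing frame in the buffer until its footer arrives
            while (!buffer.isEmpty()) {
                List<Integer> message = new ArrayList<>();
                int b;
                while ((b = buffer.removeFirst()) != 0xFE) {  // consume up to the footer...
                    message.add(b);
                }
                // ...and the footer itself is now consumed too, unlike RemoveRange(0, Count())
                messages.add(message);
            }
            System.out.println(messages);
            // [[253, 1, 2], [253, 1, 2, 3, 4, 5, 6, 7, 8], [253, 33, 65, 25, 44]]
        }
    }
    ```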

  • Very very slow transfer speeds between Windows 7 and samba server running on Ubuntu 11.10/12.04 minimal

    - by kuzyt
    As mentioned in the title I tried transferring files between Windows 7 and the samba server running on both Ubuntu 11.10 and 12.04 but both showed very slow transfer speeds. Can someone please guide me in the right direction to debug this problem ? wget --output-document=/dev/null http://tokyo1.linode.com/100MB-tokyo.bin --2012-08-21 22:02:17-- http://tokyo1.linode.com/100MB-tokyo.bin Resolving tokyo1.linode.com (tokyo1.linode.com)... 106.187.33.12 Connecting to tokyo1.linode.com (tokyo1.linode.com)|106.187.33.12|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 104857600 (100M) [application/octet-stream] Saving to: `/dev/null' 8% [=============> ] 8,923,980 64.8K/s eta 15m 0s wlan0 IEEE 802.11abgn ESSID:"TNET" Mode:Managed Frequency:2.462 GHz Access Point: 58:6D:8F:26:20:7A Bit Rate=117 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=57/70 Signal level=-53 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:101 Invalid misc:2448 Missed beacon:0 03:00.0 Network controller: Atheros Communications Inc. AR9300 Wireless LAN adaptor (rev 01) Subsystem: Atheros Communications Inc. Device 3112 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+ Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 16 Region 0: Memory at fea00000 (64-bit, non-prefetchable) [size=128K] Expansion ROM at fea20000 [disabled] [size=64K] Capabilities: [40] Power Management version 3 Flags: PMEClk- DSI- D1+ D2- AuxCurrent=375mA PME(D0+,D1+,D2-,D3hot+,D3cold-) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME- Capabilities: [50] MSI: Enable- Count=1/4 Maskable+ 64bit+ Address: 0000000000000000 Data: 0000 Masking: 00000000 Pending: 00000000 Capabilities: [70] Express (v2) Endpoint, MSI 00 DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- MaxPayload 128 bytes, MaxReadReq 512 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend- LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <2us, L1 <64us ClockPM- Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- DevCap2: Completion Timeout: Not Supported, TimeoutDis+ DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS- Compliance De-emphasis: -6dB LnkSta2: Current De-emphasis Level: -6dB Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+ AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn- Capabilities: [140 v1] Virtual Channel Caps: 
LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed- WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- Capabilities: [300 v1] Device Serial Number 00-00-00-00-00-00-00-00 Kernel driver in use: ath9k Kernel modules: ath9k

  • Union struct produces garbage and general question about struct nomenclature

    - by SoulBeaver
    I read about unions the other day( today ) and tried the sample functions that came with them. Easy enough, but the result was clear and utter garbage. The first example is: union Test { int Int; struct { char byte1; char byte2; char byte3; char byte4; } Bytes; }; where an int is assumed to have 32 bits. After I set a value Test t; t.Int = 7; and then cout cout << t.Bytes.byte1 << etc... the individual bytes, there is nothing displayed, but my computer beeps. Which is fairly odd I guess. The second example gave me even worse results. union SwitchEndian { unsigned short word; struct { unsigned char hi; unsigned char lo; } data; } Switcher; Looks a little wonky in my opinion. Anyway, from the description it says, this should automatically store the result in a high/little endian format when I set the value like Switcher.word = 7656; and calling with cout << Switcher.data.hi << endl The result of this were symbols not even defined in the ASCII chart. Not sure why those are showing up. Finally, I had an error when I tried correcting the example by, instead of placing Bytes at the end of the struct, positioning it right next to it. So instead of struct {} Bytes; I wanted to write struct Bytes {}; This tossed me a big ol' error. What's the difference between these? Since C++ cannot have unnamed structs it seemed, at the time, pretty obvious that the Bytes positioned at the beginning and at the end are the things that name it. Except no, that's not the entire answer I guess. What is it then?
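    The beep, at least, has a mundane explanation: byte1 of the int 7 holds the value 7, and streaming a char of 7 writes the ASCII BEL control character instead of the digit; cast to int before printing to see the real byte values. The same "view an int as bytes" exercise in Java, where ByteBuffer plays the role of the union:

    ```java
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class IntBytes {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
            buf.putInt(7);
            for (int i = 0; i < 4; i++) {
                // print as a number; printing (char) buf.get(0) == 7 would emit BEL and beep
                System.out.println("byte " + i + " = " + buf.get(i));  // 7, 0, 0, 0
            }
        }
    }
    ```

    As for the nomenclature question: inside the union, struct {} Bytes; declares a member named Bytes whose type is an unnamed struct, while struct Bytes {}; declares only a nested type called Bytes and no member at all, so t.Bytes.byte1 stops referring to anything, which is why moving the name produces an error.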

  • Problem with FIFOs on Linux

    - by nunos
    I am having problem debugging why n_bytes in read_from_fifo function in client.c doesn't correspond to the value written to the fifo. It should only write 25 bytes but it tries to read a lot more (1836020505 bytes (!) to be exact). Any idea why this is happening? server.c: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <errno.h> #include <unistd.h> #include <fcntl.h> #include <sys/wait.h> #include <signal.h> #include <pthread.h> #include <sys/stat.h> typedef enum { false, true } bool; //first read the int with the number of bytes the data will have //then read that number of bytes bool read_from_fifo(int fd, char* var) { int n_bytes; if (read(fd, &n_bytes, sizeof(int))) { printf("going to read %d bytes\n", n_bytes); if (read(fd, var, n_bytes)) printf("read var\n"); else { printf("error in read var. errno: %d\n", errno); exit(-1); } } return true; } int main() { mkfifo("/tmp/foo", 0660); int fd = open("/tmp/foo", O_RDONLY); char var[100]; read_from_fifo(fd, var); printf("var: %s\n", var); return 0; } client.c: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <errno.h> #include <unistd.h> #include <fcntl.h> typedef enum { false, true } bool; //first write to fd a int with the number of bytes that will be written afterwards bool write_to_fifo(int fd, char* data) { int n_bytes = (strlen(data)) * sizeof(char); printf("going to write %d bytes\n", n_bytes); if (write(fd, &n_bytes, sizeof(int) != -1)) if (write(fd, data, n_bytes) != -1) return true; return false; } int main() { int fd = open("/tmp/foo", O_WRONLY); char data[] = "some random string abcdef"; write_to_fifo(fd, data); return 0; } Help is greatly appreciated. Thanks in advance.
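    The culprit is a misplaced parenthesis in the client: write(fd, &n_bytes, sizeof(int) != -1) evaluates sizeof(int) != -1 to 1, so only one byte of the length is sent, and the server's 4-byte read then pulls in 25 plus the first three payload bytes 's', 'o', 'm' — exactly 0x6D6F7319 = 1836020505 on a little-endian machine. For contrast, a sketch of the same length-prefixed protocol in Java (an in-memory stream standing in for the FIFO), where DataOutputStream/DataInputStream keep the framing explicit:

    ```java
    import java.io.*;

    public class LengthPrefixedPipe {
        public static void main(String[] args) throws IOException {
            byte[] payload = "some random string abcdef".getBytes("US-ASCII");

            // writer side: all four length bytes, then the payload
            ByteArrayOutputStream pipe = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(pipe);
            out.writeInt(payload.length);            // 25
            out.write(payload);

            // reader side
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(pipe.toByteArray()));
            int n = in.readInt();                    // 25, not 1836020505
            byte[] buf = new byte[n];
            in.readFully(buf);                       // loops internally; a bare read() may return early
            System.out.println(new String(buf, "US-ASCII"));
        }
    }
    ```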

  • Android , Read in binary data and write it to file

    - by Shpongle
    Hi all , Im trying to read in image file from a server , with the code below . It keeps going into the exception. I know the correct number of bytes are being sent as I print them out when received. Im sending the image file from python like so #open the image file and read it into an object imgfile = open (marked_image, 'rb') obj = imgfile.read() #get the no of bytes in the image and convert it to a string bytes = str(len(obj)) #send the number of bytes self.conn.send( bytes + '\n') if self.conn.sendall(obj) == None: imgfile.flush() imgfile.close() print 'Image Sent' else: print 'Error' Here is the android part , this is where I'm having the problem. Any suggestions on the best way to go about receiving the image and writing it to a file ? //read the number of bytes in the image String noOfBytes = in.readLine(); Toast.makeText(this, noOfBytes, 5).show(); byte bytes [] = new byte [Integer.parseInt(noOfBytes)]; //create a file to store the retrieved image File photo = new File(Environment.getExternalStorageDirectory(), "PostKey.jpg"); DataInputStream dis = new DataInputStream(link.getInputStream()); try{ os =new FileOutputStream(photo); byte buf[]=new byte[1024]; int len; while((len=dis.read(buf))>0) os.write(buf,0,len); Toast.makeText(this, "File recieved", 5).show(); os.close(); dis.close(); }catch(IOException e){ Toast.makeText(this, "An IO Error Occured", 5).show(); } EDIT: I still cant seem to get it working. I have been at it since and the result of all my efforts have either resulted in a file that is not the full size or else the app crashing. I know the file is not corrupt before sending server side. As far as I can tell its definitely sending too as the send all method in python sends all or throws an exception in the event of an error and so far it has never thrown an exception. So the client side is messed up . I have to send the file from the server so I cant use the suggestion suggested by Brian .
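    A likely source of the corruption is mixing two wrappers over the same socket: readLine() comes from a buffered reader that reads ahead, so bytes it has buffered never reach the DataInputStream created afterwards, and the payload comes up short. Reading the header line and the body from a single stream avoids that; a sketch (method name hypothetical):

    ```java
    import java.io.BufferedInputStream;
    import java.io.DataInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ImageReceiver {
        /** Read "<length>\n" and then exactly that many bytes, all from one stream. */
        static void receiveTo(InputStream socketIn, String path) throws IOException {
            DataInputStream in = new DataInputStream(new BufferedInputStream(socketIn));

            StringBuilder header = new StringBuilder();
            int c;
            while ((c = in.read()) != -1 && c != '\n') {
                header.append((char) c);             // ASCII digits sent by the Python side
            }
            int length = Integer.parseInt(header.toString().trim());

            byte[] image = new byte[length];
            in.readFully(image);                     // blocks until every byte has arrived

            try (FileOutputStream out = new FileOutputStream(path)) {
                out.write(image);
            }
        }
    }
    ```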

  • Errors when switching to specific static IP

    - by michaelc
    I had a Fedora box running using my static IP 69.169.136.6, etc, all configured according to what the ISP required. Just recently the hard drive failed (and I should have been keeping better backups) - while it is being recovered I would like to put up a webpage on my Archlinux PC explaining the problem - I presently do not have sufficient access to change the DNS record assigned to the domain. When I change my ip address while my system is running to 69.169.136.6, ifconfig reports the new ip address, but http://whatismyip.com/ does not. When I change it and reboot, I can't ping - the message I recieve is "connect: Network is unreachable" (when given one of google.com 's IP addresses - hostnames give me ping: unknown host xxx). Until I have access to the DNS system, what can I do to make this work? Edit: With new IP address, same problem, IP is now 69.169.136.29. Some commands might be useful: #ping 69.169.136.1 PING 69.169.136.1 (69.169.136.1) 56(84) bytes of data. 64 bytes from 69.169.136.1: icmp_seq=1 ttl=64 time=0.377 ms #ping 69.169.190.211 connect: Network is unreachable #ping 208.72.160.67 connect: Network is unreachable #ifconfig eth0 Link encap:Ethernet HWaddr 00:E0:4D:97:23:9B inet addr:69.169.136.29 Bcast:69.169.137.255 Mask:255.255.254.0 inet6 addr: fe80::2e0:4dff:fe97:239b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:132091 errors:0 dropped:0 overruns:0 frame:0 TX packets:17 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:9635179 (9.1 Mb) TX bytes:1322 (1.2 Kb) Interrupt:29 Base address:0x6000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:48 errors:0 dropped:0 overruns:0 frame:0 TX packets:48 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2480 (2.4 Kb) TX bytes:2480 (2.4 Kb) #ip route 69.169.136.0/23 dev eth0 proto kernel scope link src 69.169.136.29 #cat /etc/resolv.conf # Generated by dhcpcd #nameserver 208.67.222.222 #nameserver 208.67.220.220 nameserver 69.169.190.211 nameserver 208.72.160.67 # /etc/resolv.conf.tail can replace this line Update: have new static IP addresses, verified to work in Windows... Relevant portions of /etc/rc.conf below: #Static IP example #eth0="eth0 69.169.136.6 netmask 255.255.254.0 broadcast 69.169.136.1" #eth0="eth0 69.169.136.29 netmask 255.255.254.0 broadcast 69.169.137.255" eth0="eth0 69.169.136.32 netmask 255.255.254.0 broadcast 69.169.137.255" #eth0="dhcp" INTERFACES=(eth0) # Routes to start at boot-up (in this order) # Declare each route then list in ROUTES # - prefix an entry in ROUTES with a ! to disable it # #gateway="default gw 192.168.0.1" gateway="default gw 69.169.136.1" #gateway="69.169.136.1" ROUTES=(!gateway) #ROUTES=()

  • InnoDB: Error: log file ./ib_logfile0 is of different size

    - by jack
    I just added the following lines in /etc/mysql/my.cnf after I converted one database to use InnoDB engine. innodb_buffer_pool_size = 2560M innodb_log_file_size = 256M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 16 innodb_flush_method = O_DIRECT But it raise "ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query" error restarting mysqld. And mysql error log shows the following InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 268435456 bytes! 100118 20:52:52 [ERROR] Plugin 'InnoDB' init function returned error. 100118 20:52:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100118 20:52:52 [ERROR] Unknown/unsupported table type: InnoDB 100118 20:52:52 [ERROR] Aborting So I commented out this line # innodb_log_file_size = 256M And it restarted mysql successfully. I wonder what's the "5242880 bytes of log file" showed in mysql error? It's the first database on InnoDB engine on this server so when and where is that log file created? In this case, how can I enable innodb_log_file_size directive in my.cnf? EDIT I tried to delete /var/lib/mysql/ib_logfile0 and restart mysqld but it still failed. It now shows the following in error log. 100118 21:27:11 InnoDB: Log file ./ib_logfile0 did not exist: new to be created InnoDB: Setting log file ./ib_logfile0 size to 256 MB InnoDB: Database physically writes the file full: wait... InnoDB: Progress in MB: 100 200 InnoDB: Error: log file ./ib_logfile1 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 268435456 bytes! Resolution It works now after deleted both ib_logfile0 and ib_logfile1 in /var/lib/mysql

  • What to sign when signing a message with ws-security

    - by Heavy Bytes
    I am adding security to my web service and chose to sign the Timestamp and Token. While reading docs I found a lot of examples where they sign the Body of the SOAP message. My question is: what is best to sign? From what I understand signing the Body could lead to performance issues if the Body is pretty large. Thanks.

  • VirtualBox Issue: virtualbox changed my Computer Name's ip address in Windows

    - by suud
    I had installed virtualbox 4.2.2 in Windows 7. My Computer Name is: MY-PC My IP address (using ipconfig /all command) is: 192.168.1.101 My IP is dynamic and I set DNS to google dns (8.8.8.8) When I ping MY-PC, I got this result: Pinging MY-PC [192.168.56.1] with 32 bytes of data: Reply from 192.168.56.1: bytes=32 time<1ms TTL=128 Reply from 192.168.56.1: bytes=32 time<1ms TTL=128 Reply from 192.168.56.1: bytes=32 time<1ms TTL=128 Reply from 192.168.56.1: bytes=32 time<1ms TTL=128 My virtualbox was not running and I expected the ip adress of MY-PC is 192.168.1.101, not 192.168.56.1 Then I run command: nbtstat -a MY-PC and I got this result: VirtualBox Host-Only Network: Node IpAddress: [192.168.56.1] Scope Id: [] NetBIOS Remote Machine Name Table Name Type Status --------------------------------------------- MY-PC <00> UNIQUE Registered WORKGROUP <00> GROUP Registered MY-PC <20> UNIQUE Registered MAC Address = 08-00-27-00-60-B3 Local Area Connection: Node IpAddress: [0.0.0.0] Scope Id: [] Host not found. Wireless Network Connection: Node IpAddress: [192.168.1.101] Scope Id: [] NetBIOS Remote Machine Name Table Name Type Status --------------------------------------------- MY-PC <00> UNIQUE Registered WORKGROUP <00> GROUP Registered MY-PC <20> UNIQUE Registered MAC Address = 94-0C-6D-E5-6D-5D So it seems virtualbox caused this problem. I want to know how to change back my Computer Name's ip address to 192.168.1.101 (or any ip address that set by my internet connection)?

  • Websphere 7 simple realm (like tomcat-users.xml)

    - by Heavy Bytes
    I am trying to port a J2EE app from Tomcat to Websphere and I'm not too familiar with Websphere. The only problem I am having is authorization (I use basic-authentication in my web.xml). In Tomcat I use the tomcat-users.xml file to define my users/passwords and to what roles they belong. How do I do this "simply" in Websphere? When deploying the EAR to Websphere it also asks me to map my role from web.xml to a user or group. Do I have to set up some sort of realm? Custom user registry? Thanks. UPDATE: I configured a Standalone custom registry, however I can't get a log-in prompt for username/password. It works just fine in Tomcat, and it doesn't in Websphere. Code from web.xml <security-constraint> <web-resource-collection> <web-resource-name>basic-auth security</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <auth-constraint> <role-name>HELLO_USER</role-name> </auth-constraint> <user-data-constraint>NONE</user-data-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> </login-config> <security-role> <role-name>HELLO_USER</role-name> </security-role>

  • Why does traceroute take much longer than ping?

    - by PHP
    How to explain this? C:\Documents and Settings\Administrator>tracert google.com Tracing route to google.com [64.233.189.104] over a maximum of 30 hops: 1 <1 ms <1 ms <1 ms 192.168.0.1 2 7 ms <1 ms <1 ms reserve.cableplus.com.cn [218.242.223.209] 3 108 ms 135 ms 163 ms 211.154.70.10 4 * * * Request timed out. 5 2 ms * 1 ms 211.154.64.114 6 1 ms 1 ms 1 ms 211.154.72.185 7 1 ms 1 ms 1 ms 202.96.222.77 8 2 ms 1 ms 2 ms 61.152.81.145 9 1 ms 2 ms 1 ms 61.152.86.54 10 1 ms 1 ms 1 ms 202.97.33.238 11 2 ms 2 ms 2 ms 202.97.33.54 12 2 ms 1 ms 2 ms 202.97.33.5 13 33 ms 33 ms 33 ms 202.97.61.50 14 34 ms 34 ms 34 ms 202.97.62.214 15 34 ms 186 ms 37 ms 209.85.241.56 16 35 ms 35 ms 44 ms 66.249.94.34 17 34 ms 34 ms 34 ms hkg01s01-in-f104.1e100.net [64.233.189.104] Trace complete. So average time should be :1+7+108+2+1+1+2+1+1+2+2+33+34+34+35+34+34+35+34,which is a lot bigger than ping C:\Documents and Settings\Administrator>ping google.com Pinging google.com [64.233.189.104] with 32 bytes of data: Reply from 64.233.189.104: bytes=32 time=34ms TTL=241 Reply from 64.233.189.104: bytes=32 time=34ms TTL=241 Reply from 64.233.189.104: bytes=32 time=34ms TTL=241 Reply from 64.233.189.104: bytes=32 time=34ms TTL=241 Ping statistics for 64.233.189.104: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 34ms, Maximum = 34ms, Average = 34ms
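    A note on the arithmetic: each line of traceroute is an independent round trip measured to that intermediate hop with its own probes, so the per-hop times are not additive. The only figure directly comparable to ping is the final hop, and hop 17's 34 ms matches ping's 34 ms average exactly. The larger readings at hop 3 (108-163 ms) mean only that that router is slow to generate ICMP time-exceeded replies, not that traffic passing through it is delayed; traceroute also takes longer to finish because it sends three probes per TTL and waits out the timeouts at hop 4.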

  • networking through openstack installed on vm

    - by Mandar Katdare
    I am trying to set up a test installation of Openstack on a Ubuntu 12.04 VM running on a ESXi server. So far I have been able to launch the VMs on the ESXi, however am unable to assign IP addresses to them. As the VM with the Openstack installation has a single public IP, I wish to assign IPs to the VMs create through Openstack so that they can directly interact with the public network itself without having a separate private network. So I feel that bridging would not be the correct option here. But am unable to find the correct documents to go ahead with such an install. My ifconfig looks as follows: eth0 Link encap:Ethernet HWaddr 00:0c:29:6f:8a:d7 inet addr:192.168.4.167 Bcast:192.168.4.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe6f:8ad7/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:391640 errors:33 dropped:98 overruns:0 frame:0 TX packets:545044 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:40303931 (40.3 MB) TX bytes:763127348 (763.1 MB) Interrupt:18 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:146127 errors:0 dropped:0 overruns:0 frame:0 TX packets:146127 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:799815763 (799.8 MB) TX bytes:799815763 (799.8 MB) virbr0 Link encap:Ethernet HWaddr 8a:80:33:32:63:a0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) The eth0 is the adapter that I intend to use for all communication. My nova.conf looks as follows: --dhcpbridge_flagfile=/etc/nova/nova.conf --dhcpbridge=/usr/bin/nova-dhcpbridge --logdir=/var/log/nova --state_path=/var/lib/nova --lock_path=/var/lock/nova --allow_admin_api=true --use_deprecated_auth=false --auth_strategy=keystone --scheduler_driver=nova.scheduler.simple.SimpleScheduler --s3_host=192.168.4.167 --ec2_host=192.168.4.167 --rabbit_host=192.168.4.167 --cc_host=192.168.4.167 --nova_url=http://192.168.4.167:8774/v1.1/ --routing_source_ip=192.168.4.167 --glance_api_servers=192.168.4.167:9292 --image_service=nova.image.glance.GlanceImageService --iscsi_ip_prefix=192.168.4 --sql_connection=mysql://novadbadmin:[email protected]/nova --ec2_url=http://192.168.4.167:8773/services/Cloud --keystone_ec2_url=http://192.168.4.167:5000/v2.0/ec2tokens --api_paste_config=/etc/nova/api-paste.ini --libvirt_type=kvm --libvirt_use_virtio_for_bridges=true --start_guests_on_host_boot=true --resume_guests_state_on_host_boot=true --vnc_enabled=true --vncproxy_url=http://192.168.4.167:6080 --vnc_console_proxy_url=http://192.168.4.167:6080 # network specific settings --network_manager=nova.network.manager.FlatDHCPManager --public_interface=eth0 --vmwareapi_host_ip=192.168.4.254 --vmwareapi_host_username=**** --vmwareapi_host_password=**** --vmwareapi_wsdl_loc=http://127.0.0.1:8080/wsdl/vim25/vimService.wsdl --fixed_range=192.168.4.190/24 --floating_range=192.168.4.190/24 --network_size=32 --flat_network_dhcp_start=192.168.4.190 --flat_injected=False --force_dhcp_release --iscsi_helper=tgtadm --connection_type=vmwareapi --root_helper=sudo nova-rootwrap --verbose --libvirt_use_virtio_for_bridges --ec2_private_dns_show --novnc_enabled=true --novncproxy_base_url=http://192.168.4.167:6080/vnc_auto.html --vncserver_proxyclient_address=192.168.4.167 --vncserver_listen=192.168.4.167 
192.168.4.167 is my VM with the Openstack installation and 192.168.4.254 is my ESXi server on which the VM runs. Can anyone advice me about how to proceed? Thanks, Mandar

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by Puddingfox
    I have a DNS server set up on one of my machines using BIND 9.7 Everything works fine with it. On my Windows 7 desktop, I have statically-assigned all network values. I have one DNS server set -- my DNS server. On my desktop, I can ping a third machine by IP fine. I can nslookup the hostname of the third machine fine. When I ping the hostname, it says it cannot find the host. / C:\Users\James>nslookup icecream Server: cake.my.domain Address: xxx.xxx.6.3 Name: icecream.my.domain Address: xxx.xxx.6.9 C:\Users\James>ping xxx.xxx.6.9 Pinging xxx.xxx.6.9 with 32 bytes of data: Reply from xxx.xxx.6.9: bytes=32 time<1ms TTL=255 Reply from xxx.xxx.6.9: bytes=32 time<1ms TTL=255 Reply from xxx.xxx.6.9: bytes=32 time<1ms TTL=255 Reply from xxx.xxx.6.9: bytes=32 time<1ms TTL=255 Ping statistics for xxx.xxx.6.9: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms C:\Users\James>ping icecream Ping request could not find host icecream. Please check the name and try again. I have also specified the search domain as my.domain xxx.xxx and my.domain substituted for security Why can I not ping by hostname? I also can not ping using the FQDN. The problem is that this problem is shared by all applications that resolve hostnames. I cannot use PuTTY to SSH to my machines by hostname; only by IP

  • Linux boot on a RAID1 software RAID?

    - by azera
    Hello I am trying to convert my single disk boot to a raid1 boot So far here is what i have: I sucessfully create the raid 1 as degraded with the new drive alone, I copied all the data on it I can mount that raid 1, see its files etc I already have a raid5 that is working on the same box (although not booting on it) I have installed grub on both drive When grub boot, it loads the kernel alright, but during the kernel boot it fails to load the "root block device" The kernel tells me : 1 - detected that root device is an md device 2 - determining root devices 3 - mounting root 4 - mounting /dev/md125 on /newroot failed: input/output error. Please enter another root device: ... At this point, if I enter /dev/sda3 (my "old" root device that isn't converted to raid yet) everything boots fine without the root. The /dev/md125 device is indeed created but it seems to be created after the error happens, as in it creates it after loading the device, when mdadm is loaded. Somehow it looks like it can't/doesn't load the raid array before it needs to mount it, and I don't know how I can solve that. My config files (taken from the system once it boots with sda3 as root device): $ cat /etc/mdadm.conf ARRAY /dev/md/md0-r5 metadata=0.90 UUID=1a118934:c831bdb3:64188b84:66721085 ARRAY /dev/md125 metadata=0.90 UUID=48ec4190:a80d4dde:64188b84:66721085 $ cat /proc/mdstat Personalities : [raid1] [raid6] [raid5] [raid4] [raid0] [raid10] md125 : active raid1 sdc3[1] 477853312 blocks [2/1] [_U] md127 : active raid5 sdd[0] sdf[3] sdb[2] sde[1] 4395415488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none> $ cat /boot/grub/menu.lst default 0 timeout 8 splashimage=(hd0,0)/boot/grub/splash.xpm.gz title Gentoo Linux 2.6.31-r10 root (hd0,0) #kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/ram0 real_root=/dev/sda3 kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/md125 md=125,/dev/sdc3,/dev/sda3 initrd /boot/initramfs-genkernel-x86_64-2.6.31-gentoo-r10 # blkid /dev/sda1: UUID="89fee223-b845-4e0a-8a0b-e6cf695d5bcf" TYPE="ext2" /dev/sda2: UUID="a72296a8-d7d4-447f-a34b-ee920fd1a767" TYPE="swap" /dev/sda3: UUID="97eb0a6a-c385-4a9d-bf74-c0bab1fa4dc1" TYPE="ext3" /dev/sdb: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdc1: UUID="d36537fd-19a0-b8a3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdd: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sde: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/md127: UUID="13a41589-4cf1-4c04-91ca-37484182c783" TYPE="ext4" /dev/sdf: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member" /dev/sdc2: UUID="a1916397-1b48-45d7-9f98-73aa521e882f" TYPE="swap" /dev/sdc3: UUID="48ec4190-a80d-4dde-6418-8b8466721085" TYPE="linux_raid_member" /dev/md125: UUID="c947ed64-1d4d-4d1d-b4d2-24669fff916e" SEC_TYPE="ext2" TYPE="ext3" # mdadm -E mdadm: No devices to examine # fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe975e9fc Device Boot Start End Blocks Id System /dev/sda1 1 5 40131 83 Linux /dev/sda2 6 1311 10490445 82 Linux swap / Solaris /dev/sda3 1312 60801 477853425 83 Linux Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xe975e9fc Device Boot Start End Blocks Id System /dev/sdc1 1 5 40131 83 Linux /dev/sdc2 6 1311 10490445 82 
Linux swap / Solaris /dev/sdc3 1312 60801 477853425 83 Linux Disk /dev/md125: 489.3 GB, 489321791488 bytes 2 heads, 4 sectors/track, 119463328 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk identifier: 0x00000000 Disk /dev/md125 doesn't contain a valid partition table

  • How to configure Amazon Route53 to work without www in the sub-domain

    - by romuloigor
    Edit: Amazon now supports this: http://aws.typepad.com/aws/2012/12/root-domain-website-hosting-for-amazon-s3.html I have my domain configured in Route53 at Amazon AWS. Running the ping command on my domain without www: $ ping gabster.com.br ping: cannot resolve gabster.com.br: Unknown host Running the ping command on my domain with www: $ ping www.gabster.com.br PING s3-website-sa-east-1.amazonaws.com (177.72.245.6): 56 data bytes 64 bytes from 177.72.245.6: icmp_seq=0 ttl=244 time=25.027 ms 64 bytes from 177.72.245.6: icmp_seq=1 ttl=244 time=25.238 ms 64 bytes from 177.72.245.6: icmp_seq=2 ttl=244 time=25.024 ms In Route 53, creating a record set with Name: [ ].gabster.com.br and CNAME value www.gabster.com.br displays the error "RRSet of type CNAME with DNS name mydomain.com is not permitted at apex in zone mydomain.com"

  • Two network adapters on Ubuntu Server 9.10 - Can't have both working at once?

    - by Rob
    I'm trying to set up two network adapters in Ubuntu (server edition) 9.10. One for the public internet, the other a private LAN. During the install, I was asked to pick a primary network adapter (eth0 or eth1). I chose eth0, gave the installer the details listed below in the contents of /etc/network/interfaces, and carried on. I've been using this adapter with these setting for the last few days, and every thing's been fine. Today, I decide it's time to set up the local adapter. I edit the /etc/network/interfaces to add the details for eth1 (see below), and restart networking with sudo /etc/init.d/networking restart. After this, attempting to ping the machine using it's external IP address fails, but I can ping it's local IP address. If I bring eth1 down using sudo ifdown eth1, I can successfully ping the machine via it's external IP address again (but obviously not it's internal IP address). Bringing eth1 back up returns us to the original problem state: external IP not working, internal IP working. Here's my /etc/network/interfaces (I've removed the external IP information, but these settings are unchanged from when it worked) rob@rhea:~$ cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary (public) network interface auto eth0 iface eth0 inet static address xxx.xxx.xxx.xxx netmask xxx.xxx.xxx.xxx network xxx.xxx.xxx.xxx broadcast xxx.xxx.xxx.xxx gateway xxx.xxx.xxx.xxx # The secondary (private) network interface auto eth1 iface eth1 inet static address 192.168.99.4 netmask 255.255.255.0 network 192.168.99.0 broadcast 192.168.99.255 gateway 192.168.99.254 I then do this: rob@rhea:~$ sudo /etc/init.d/networking restart * Reconfiguring network interfaces... [ OK ] rob@rhea:~$ sudo ifup eth0 ifup: interface eth0 already configured rob@rhea:~$ sudo ifup eth1 ifup: interface eth1 already configured Then, from another machine: C:\Documents and Settings\Rob>ping [external ip] Pinging [external ip] with 32 bytes of data: Request timed out. Request timed out. Request timed out. Request timed out. Ping statistics for [external ip]: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), Back on the Ubuntu server in question: rob@rhea:~$ sudo ifdown eth1 ... and again on the other machine: C:\Documents and Settings\Rob>ping [external ip] Pinging [external ip] with 32 bytes of data: Reply from [external ip]: bytes=32 time<1ms TTL=63 Reply from [external ip]: bytes=32 time<1ms TTL=63 Reply from [external ip]: bytes=32 time<1ms TTL=63 Reply from [external ip]: bytes=32 time<1ms TTL=63 Ping statistics for [external ip]: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms So... what am I doing wrong?

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output mdadm --detail /dev/md2 /dev/md2: Version : 0.90 Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Array Size : 3662760640 (3493.08 GiB 3750.67 GB) Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 Persistence : Superblock is persistent Seems to contradict this. (the same for every disk in the array) mdadm --examine /dev/sdc2 /dev/sdc2: Magic : a92b4efc Version : 0.90.00 UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host) Creation Time : Wed Mar 14 18:20:52 2012 Raid Level : raid10 Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB) Array Size : 2930208640 (2794.46 GiB 3000.53 GB) Raid Devices : 5 Total Devices : 5 Preferred Minor : 2 The array was created like this. mdadm -v --create /dev/md2 \ --level=raid10 --layout=o2 --raid-devices=5 \ --chunk=64 --metadata=0.90 \ /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2 Each of the 5 individual drives have partitions like this. Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00057754 Device Boot Start End Blocks Id System /dev/sdc1 2048 34815 16384 83 Linux /dev/sdc2 34816 2930243583 1465104384 fd Linux raid autodetect Backstory So the SATA controller failed in a box I provide some support for. The failure was a ugly and so individual drives fell out of the array a little at a time. While there are backups, we the are not really done as frequently as we really need. There is some data that I am trying to recover if I can. I got additional hardware and I was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying that off, but I am seeing lots of errors when I try to copy the most recent data. When I am trying to access that most recent data I am getting errors like below which makes me think that the array size discrepancy may be the problem. Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944 Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944

  • DRBD Not syncing between my nodes

    - by Mike Curry
    Some version info: Operating system is Ubuntu 11.10, on EC2, kernel is 3.0.0-16-virtual and the application info is: Version: 8.3.11 (api:88) GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by buildd@allspice, 2011-07-05 19:51:07 Getting some strange errors in dmesg (seen below) as well, there is no replication happening. I have made my first node primary and its showing: drbd driver loaded OK; device status: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 m:res cs ro ds p mounted fstype 0:r0 StandAlone Primary/Unknown UpToDate/DUnknown r----s ext3 my secondary node is showing: drbd driver loaded OK; device status: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 m:res cs ro ds p mounted fstype 0:r0 StandAlone Secondary/Unknown Inconsistent/DUnknown r----s Showing /proc/drbd on the master shows: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r----s ns:0 nr:0 dw:4 dr:1073 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:262135964 Showing /proc/drbd on the slave shows that there is nothing being transfered... version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 0: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown r----s ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:262135964 Here is my config... resource r0 { protocol C; startup { wfc-timeout 15; degr-wfc-timeout 60; } net { cram-hmac-alg sha1; shared-secret "test123; } on drbd01 { device /dev/drbd0; disk /dev/xvdm; address 23.XX.XX.XX:7788; # blocked out ip meta-disk internal; } on drbd02 { device /dev/drbd0; disk /dev/xvdm; address 184.XX.XX.XX:7788; #blocked out ip meta-disk internal; } } I have run the following on the master: sudo drbdadm -- --overwrite-data-of-peer primary all There is no firewall between the systems. Here is the dmesg with some errors: [2285172.969955] drbd: initialized. Version: 8.3.11 (api:88/proto:86-96) [2285172.969960] drbd: srcversion: DA5A13F16DE6553FC7CE9B2 [2285172.969962] drbd: registered as block device major 147 [2285172.969965] drbd: minor_table @ 0xffff88000276ea00 [2285173.000952] block drbd0: Starting worker thread (from drbdsetup [1300]) [2285173.003971] block drbd0: disk( Diskless -> Attaching ) [2285173.006150] block drbd0: No usable activity log found. [2285173.006154] block drbd0: Method to ensure write ordering: flush [2285173.006158] block drbd0: max BIO size = 4096 [2285173.006165] block drbd0: drbd_bm_resize called with capacity == 524271928 [2285173.008512] block drbd0: resync bitmap: bits=65533991 words=1023969 pages=2000 [2285173.008518] block drbd0: size = 250 GB (262135964 KB) [2285173.079566] block drbd0: bitmap READ of 2000 pages took 17 jiffies [2285173.081189] block drbd0: recounting of set bits took additional 1 jiffies [2285173.081194] block drbd0: 250 GB (65533991 bits) marked out-of-sync by on disk bit-map. 
[2285173.081203] block drbd0: Suspended AL updates [2285173.081210] block drbd0: disk( Attaching -> UpToDate ) [2285173.081214] block drbd0: attached to UUIDs 1C1291D39584C1D1:0000000000000004:0000000000000000:0000000000000000 [2285173.095016] block drbd0: conn( StandAlone -> Unconnected ) [2285173.095046] block drbd0: Starting receiver thread (from drbd0_worker [1301]) [2285173.099297] block drbd0: receiver (re)started [2285173.099304] block drbd0: conn( Unconnected -> WFConnection ) [2285173.099330] block drbd0: bind before connect failed, err = -99 [2285173.099346] block drbd0: conn( WFConnection -> Disconnecting ) [2285173.295788] block drbd0: Discarding network configuration. [2285173.295815] block drbd0: Connection closed [2285173.295826] block drbd0: conn( Disconnecting -> StandAlone ) [2285173.295840] block drbd0: receiver terminated [2285173.295844] block drbd0: Terminating drbd0_receiver Edit: Reading some other similar issues, it was suggested to do a 'drbdadm dump all', so I figured it couldn't hurt. ubuntu@drbd01:~$ drbdadm dump all /etc/drbd.conf:19: in resource r0, on drbd01: IP 23.XX.XX.XX not found on this host. and on slave: root@drbd02:~# drbdadm dump all /etc/drbd.conf:25: in resource r0, on drbd02: IP 184.XX.XX.XX not found on this host. Strange it doesn't find its own ip, however, this is an Amazon EC2 system using an elastic IP... here are my ipconfigs for both... master: ubuntu@drbd01:~$ ifconfig eth0 Link encap:Ethernet HWaddr 22:00:0a:1c:27:11 inet addr:10.28.39.17 Bcast:10.28.39.63 Mask:255.255.255.192 inet6 addr: fe80::2000:aff:fe1c:2711/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1569 errors:0 dropped:0 overruns:0 frame:0 TX packets:1169 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:124409 (124.4 KB) TX bytes:213601 (213.6 KB) Interrupt:26 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) slave: root@drbd02:~# ifconfig eth0 Link encap:Ethernet HWaddr 12:31:3f:00:14:9d inet addr:10.160.27.107 Bcast:10.160.27.255 Mask:255.255.254.0 inet6 addr: fe80::1031:3fff:fe00:149d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:915 errors:0 dropped:0 overruns:0 frame:0 TX packets:774 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:75381 (75.3 KB) TX bytes:109673 (109.6 KB) Interrupt:26 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

  • Accidentally overwrote system.dbf - What now?

    - by Filip Ekberg
    I accidentally overwrote system.dbf in /usr/lib/oracle/xe/oradata/XE/system.dbf Well I did not actually do it accidentally, however I overwrote it because of other failures in the database. And when I try running the following: SQL> shutdown ORA-01109: database not open Database dismounted. ORACLE instance shut down. SQL> startup ORACLE instance started. Total System Global Area 289406976 bytes Fixed Size 1258488 bytes Variable Size 92277768 bytes Database Buffers 192937984 bytes Redo Buffers 2932736 bytes Database mounted. ORA-01589: must use RESETLOGS or NORESETLOGS option for database open Now I want to try to Recover the database because starting it in mounted or standard surely doesn't work. SQL> recover database using backup controlfile; ORA-00283: recovery session canceled due to errors ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf' ORA-01122: database file 1 failed verification check ORA-01110: data file 1: '/usr/lib/oracle/xe/oradata/XE/system.dbf' ORA-01206: file is not part of this database - wrong database id How do I solve this? Is it even possible? My "real" problem was that I ran the /etc/init.d/oracle-xe configure and it overwrote my old configuration and probably removed passwords and such so my tables were gone, however I found the mytablespace.dbf so I hope that it is possible to recover? Please shed some light on this.
