Search Results

Search found 3768 results on 151 pages for 'lite byte'.

Page 21 of 151

  • Store a byte[] stored in a SQL XML parameter to a varbinary(MAX) field in SQL Server 2005. Can it be

    - by Mikey John
    Store a byte[] stored in a SQL XML parameter to a varbinary(MAX) field in SQL Server 2005. Can it be done? Here's my stored procedure: set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[AddPerson] @Data AS XML AS INSERT INTO Persons (name,image_binary) SELECT rowWals.value('./@Name', 'varchar(64)') AS [Name], rowWals.value('./@ImageBinary', 'varbinary(MAX)') AS [ImageBinary] FROM @Data.nodes ('/Data/Names') as b(rowVals) SELECT SCOPE_IDENTITY() AS Id In my schema, Name is of type String and ImageBinary is of type byte[].
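
    A possible client-side approach (a sketch, assuming the binary travels inside the XML as base64 text, which value(..., 'varbinary(MAX)') can then decode; BuildPersonXml is a hypothetical helper, not from the question):

      using System;
      using System.Xml.Linq;

      static class PersonXmlBuilder
      {
          // Wraps the name and the image bytes into the /Data/Names shape the
          // stored procedure expects; the binary column is sent as base64 text.
          public static string BuildPersonXml(string name, byte[] imageBytes)
          {
              var doc = new XElement("Data",
                  new XElement("Names",
                      new XAttribute("Name", name),
                      new XAttribute("ImageBinary", Convert.ToBase64String(imageBytes))));
              return doc.ToString();
          }
      }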

    Read the article

  • Unable to cast object of type 'System.Byte[]' to type 'System.IConvertible'.

    - by alvn.dsza
    I get the error in the following code: Function ReadFile(ByVal sPath As String) As Byte Dim data As Byte data = Nothing Dim fInfo As FileInfo fInfo = New FileInfo(sPath) Dim numBytes As Long numBytes = fInfo.Length Dim fStream As FileStream fStream = New FileStream(sPath, FileMode.Open, FileAccess.Read) Dim br As BinaryReader br = New BinaryReader(fStream) data = Convert.ToByte(br.ReadBytes(numBytes)) `getting error on this line` Return data End Function
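
    For reference, br.ReadBytes already returns an array, so there is nothing for Convert.ToByte to convert, and the function would need to return Byte() rather than a single Byte. A minimal sketch of the equivalent idea (in C#, as an assumption about the intent, not the poster's exact code):

      using System.IO;

      static class FileHelper
      {
          // Reads the whole file into a byte array; no per-byte conversion is needed.
          public static byte[] ReadFile(string path)
          {
              return File.ReadAllBytes(path);
          }
      }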

    Read the article

  • Is there any faster way to parse than by walking each byte?

    - by uray
    Is there any faster way to parse text than by walking each byte of the text? I wonder if there is any special CPU (x86/x64) instruction for string operations, used by the string library, that could somehow be used to optimize the parsing routine. For example, an instruction for finding a token in a string that could be run by the hardware instead of looping over each byte until a token is found.
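
    One concrete illustration of the idea (a sketch, not a benchmark): modern runtimes expose search primitives that are vectorized internally, so delegating the token scan to them is usually faster than a hand-written per-byte loop. In C#, for example, Span<byte>.IndexOf uses SIMD instructions where the CPU supports them:

      using System;

      static class TokenScanner
      {
          // Finds the next occurrence of a one-byte token using the runtime's
          // vectorized IndexOf instead of a manual byte-by-byte loop.
          public static int FindToken(ReadOnlySpan<byte> text, byte token)
          {
              return text.IndexOf(token); // hardware-accelerated where available
          }
      }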

    Read the article

  • How can I write an extension method that converts a System.Drawing.Bitmap to a byte array?

    - by Patrick Szalapski
    How can I write an extension method that converts a System.Drawing.Bitmap to a byte array? Why not: <Extension()> _ Public Function ToByteArray(ByVal image As System.Drawing.Bitmap) As Byte() Using ms = New MemoryStream() image.Save(ms, image.RawFormat) Return ms.ToArray() End Using End Function Yet when I use that, I get "System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+." What am I doing wrong?
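
    A commonly reported workaround (offered as an assumption, not a confirmed fix for this exact case): the "generic GDI+ error" often disappears when the bitmap is saved with an explicit format instead of RawFormat, or when the stream the bitmap was originally loaded from is still open at save time. A sketch of the explicit-format variant, shown in C#:

      using System.Drawing;
      using System.Drawing.Imaging;
      using System.IO;

      static class BitmapExtensions
      {
          // Serializes the bitmap as PNG; an explicit format avoids some of the
          // cases where saving with RawFormat fails.
          public static byte[] ToByteArray(this Bitmap image)
          {
              using (var ms = new MemoryStream())
              {
                  image.Save(ms, ImageFormat.Png);
                  return ms.ToArray();
              }
          }
      }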

    Read the article

  • Minecraft-style XNA game collision?

    - by Levi
    I've been trying to get this working for ages now, I can detect if there's a solid block at any place on the map and I can check how far something is inside of it, but I don't understand how to fix the collision. I've tried loads of ways and all of them end up by the player getting stuck, glitching around, incorrect responses and I really have no idea how to go about this :/. int Chnk = Utility.GetChunkFromPosition(origin); if (Chnk == -1) return; Vector3 Pos = Utility.GetCubeVectorFromPosition(origin); if (GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)Pos.Y, (byte)Pos.Z] != 0) { isInIllegalState = true; if (velocity.Y < 0f) velocity.Y = 0f; } while (isInIllegalState) { if (GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)origin.Y, (byte)Pos.Z] != 0) origin.Y = (int)(origin.Y + 1); else isInIllegalState = false; } if (origin.Y < Chunk.YSize - 2 && GlobalWorld.LoadedChunks[Chnk].Blocks[(byte)Pos.X, (byte)(origin.Y + playerHeight.Y), (byte)Pos.Z] != 0) { velocity.Y = 0f; //Acceleration.Y = 0f; origin.Y = (int)origin.Y;// -0.5f; } for (int x = -1; x <= 1; x+=2) { for (int z = -1; z <= 1; z += 2) { Vector3 CornerPosition = new Vector3(boundingSize * x, 0, boundingSize * z); bool CorrectX = false; bool CorrectZ = false; Vector3 RoundedOrigin = Utility.RoundVector(origin); Vector3 RoundedCorner = Utility.RoundVector(origin + CornerPosition); byte BlockAdjacent = Utility.GetCubeFromPosition(origin + CornerPosition); if (BlockAdjacent == 0) continue; if (RoundedCorner.X != RoundedOrigin.X && RoundedCorner.Z != RoundedOrigin.Z) { CorrectX = true; CorrectZ = true; } if (RoundedCorner.Z != RoundedOrigin.Z && RoundedCorner.X == RoundedOrigin.X) CorrectZ = true; if (RoundedCorner.X != RoundedOrigin.X && RoundedCorner.Z == RoundedOrigin.Z) CorrectX = true; if (CorrectX && CornerPosition.X > 0) { if (origin.X > 0f) origin.X = (int)(origin.X + 1) - boundingSize; else origin.X = (int)origin.X - boundingSize; } else if (CorrectX && CornerPosition.X < 0) { if (origin.X > 0f) origin.X = (int)(origin.X) + boundingSize; else origin.X = (int)(origin.X - 1) + boundingSize; } if (CorrectZ && CornerPosition.Z > 0) { if (origin.Z > 0f) origin.Z = (int)(origin.Z + 1) - boundingSize; else origin.Z = (int)origin.Z - boundingSize; } else if (CorrectZ && CornerPosition.Z < 0) { if (origin.Z > 0f) origin.Z = (int)(origin.Z) + boundingSize; else origin.Z = (int)(origin.Z - 1) + boundingSize; } } }

    Read the article

  • How can I write byte[] to socket outputstream and specify the end of file?

    - by Christopher Francisco
    I've googled for 2 days straight and I can't find how to do this. I have an open stream between a client and a server, and the client will send a JSON string (encrypted to bytes) to the server every 3 to 5 seconds. How can I write to the Socket OutputStream so that I can read each JSON string on the server? I'm guessing I need to specify some kind of end-of-file marker or something, but I can't find any info on how to achieve this.
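
    The usual answer is message framing rather than an end-of-file marker: either prefix each message with its length, or terminate it with a delimiter the payload can never contain. A sketch of 4-byte length-prefix framing (written in C# for illustration; the question is Java, but the framing idea is identical):

      using System.IO;

      static class Framing
      {
          // Writes one message as [4-byte big-endian length][payload bytes].
          public static void WriteMessage(Stream stream, byte[] payload)
          {
              byte[] len = new byte[4];
              len[0] = (byte)(payload.Length >> 24);
              len[1] = (byte)(payload.Length >> 16);
              len[2] = (byte)(payload.Length >> 8);
              len[3] = (byte)payload.Length;
              stream.Write(len, 0, 4);
              stream.Write(payload, 0, payload.Length);
          }

          // Reads exactly one framed message; returns null on a clean end of stream.
          public static byte[] ReadMessage(Stream stream)
          {
              byte[] len = ReadExactly(stream, 4);
              if (len == null) return null;
              int size = (len[0] << 24) | (len[1] << 16) | (len[2] << 8) | len[3];
              return ReadExactly(stream, size);
          }

          static byte[] ReadExactly(Stream stream, int count)
          {
              byte[] buffer = new byte[count];
              int read = 0;
              while (read < count)
              {
                  int n = stream.Read(buffer, read, count - read);
                  if (n == 0) return null; // connection closed
                  read += n;
              }
              return buffer;
          }
      }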

    Read the article

  • Storing images in file system and returning URLs or virtually resizing and returning byte arrays?

    - by ismaelf
    I need to create a REST web service to manage user-submitted images and display them all on a website. There are multiple websites that are going to use this service to manage and display images. The requirements are to have 5 pre-defined image sizes available. The 2 options I see are the following: (1) The web service will create the 5 images, store them in the file system and store the URLs in the database when the user submits the image. When the image is requested, the web service will return an array of URLs. I see this option as being a little hard on the hard drive. The estimates are 10,000 users per site and, let's say, 100 sites. The heavy processing will be done when the user submits the image, and each image is going to be pulled from the file system. (2) The web service will store just the image that the user submits in the file system and its URL in the database. When the user requests images, the web service will get the info from the DB, load the image in memory, create its 5 instances and return an object with 5 image arrays (I will probably cache the arrays). This option is harder on the processor and memory. The heavy processing will be done when the images get requested. A plus I see for option 2 is that it will give me the option to rewrite the URL of the image and make it site-dependent (prettier), rather than having one image repository for all websites. But this is not a big deal. What do you think of these options? Do you have any other suggestions?
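
    If option 2 is chosen, the resize-on-request step it describes might look roughly like this (a sketch using System.Drawing with an assumed JPEG output; caching and the five predefined sizes are left out):

      using System.Drawing;
      using System.Drawing.Imaging;
      using System.IO;

      static class ImageResizer
      {
          // Loads the stored original and produces one resized variant in memory.
          public static byte[] Resize(string originalPath, int width, int height)
          {
              using (var original = Image.FromFile(originalPath))
              using (var resized = new Bitmap(original, new Size(width, height)))
              using (var ms = new MemoryStream())
              {
                  resized.Save(ms, ImageFormat.Jpeg);
                  return ms.ToArray();
              }
          }
      }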

    Read the article

  • Issues with reversing bit shifts that roll over the maximum byte size?

    - by Terri
    I have a string of binary numbers that was originally a regular string and will return to a regular string after some bit manipulation. I'm trying to do a simple Caesar shift on the binary string, and it needs to be reversible. I've done this with this method: public static String cShift(String ptxt, int addFactor) { String ascii = ""; for (int i = 0; i < ptxt.length(); i+=8) { int character = Integer.parseInt(ptxt.substring(i, i+8), 2); byte sum = (byte) (character + addFactor); ascii += (char)sum; } String returnToBinary = convertToBinary(ascii); return returnToBinary; } This works fine in some cases. However, I think that when a value rolls over what is representable by one byte, it becomes irreversible. On the test string "test!22*F ", with an addFactor of 12, the string becomes irreversible. Why is that, and how can I stop it?
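
    A likely explanation (stated as an assumption): adding 12 can push a value past what a signed byte holds, and routing the result through a byte cast and a char/String round trip loses the original value. Keeping the arithmetic modulo 256 and never leaving the 0-255 range keeps the shift exactly reversible. A short sketch of that idea (in C# for illustration):

      static class CaesarShift
      {
          // Shifts each byte value modulo 256; Decode undoes Encode exactly.
          public static byte[] Encode(byte[] data, int addFactor)
          {
              var result = new byte[data.Length];
              for (int i = 0; i < data.Length; i++)
                  result[i] = (byte)((data[i] + addFactor) & 0xFF);
              return result;
          }

          public static byte[] Decode(byte[] data, int addFactor)
          {
              var result = new byte[data.Length];
              for (int i = 0; i < data.Length; i++)
                  result[i] = (byte)((data[i] - addFactor) & 0xFF);
              return result;
          }
      }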

    Read the article

  • How to convert a byte array to an image in Java?

    - by nemade-vipin
    I am developing a web application in Java. In that application, I have created web services in Java. In one web service, I have created a web method which returns the image list in base64 format. The return type of the method is Vector. In the web service tester I can see the SOAP response as xsi:type="xs:base64Binary". Then I called this web method in my application, using the following code: SBTSWebService webService = null; List imageArray = null; List imageList = null; webService = new SBTSWebService(); imageArray = webService.getSBTSWebPort().getAddvertisementImage(); Iterator itr = imageArray.iterator(); while(itr.hasNext()) { String img = (String)itr.next(); byte[] bytearray = Base64.decode(img); BufferedImage imag=ImageIO.read(new ByteArrayInputStream(bytearray)); imageList.add(imag); } In this code I am receiving the error java.lang.ClassCastException: [B cannot be cast to java.lang.String on the line String img = (String)itr.next(); Is there any mistake in my code, or is there any other way to get the image back in its actual format? Can you provide code or a link through which I can resolve this issue? Note: I already posted this question before and got the suggestion to try the following code: Object next = iter.next(); System.out.println(next.getClass()) I tried it and the output shows that the web service returns byte[], but I am not able to convert this byte array into the actual image. Is there any other way to get the image in its actual format?

    Read the article

  • What scalar type should I use to store a byte[] in an Entity (EF 4.0 / MVC2)?

    - by Brad8118
    I'm trying out EF 4.0 using the model-first approach. I'd like to store images in the database and I'm not sure of the best type for the scalar in the entity. I currently have it (the image scalar type) set up as binary. From what I have been reading, the best way to store the image in the db is a byte[], so I'm assuming that binary is the way to go. If there is a better way, I'd switch. In my controller I have: //file from client to store in the db HttpPostedFileBase file = Request.Files[inputTagName]; if (file.ContentLength > 0) { keyToAdd.Image = new byte[file.ContentLength]; file.InputStream.Write(keyToAdd.Image, 0, file.ContentLength); } This builds fine, but when I run it I get an exception writing the stream to keyToAdd.Image. The exception is something like: Method does not exist. Any ideas? Note that when using the EF 4.0 model-first approach I only have int16, int32, double, string, decimal, binary, byte, DateTime, Double, Single, and SByte as available types. Thanks
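
    The exception is probably coming from calling Write on the request's InputStream; reading from it into the buffer is presumably what was intended. A sketch of that assumed fix:

      using System.Web;

      static class UploadHelper
      {
          // Reads the posted file's stream fully into a byte array
          // (the original code wrote to InputStream instead of reading from it).
          public static byte[] ReadAllBytes(HttpPostedFileBase file)
          {
              var buffer = new byte[file.ContentLength];
              int read = 0;
              while (read < buffer.Length)
              {
                  int n = file.InputStream.Read(buffer, read, buffer.Length - read);
                  if (n == 0) break; // stream ended early
                  read += n;
              }
              return buffer;
          }
      }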

    Read the article

  • How To Troubleshoot Excess Time From Connect to First Byte?

    - by Gaia
    I measured load times for a WordPress 2.9.2 install on Apache 2.2.3 and I was intrigued by the long periods between connect and first byte for the CSS and image files. Load average is 0.0, 0.0, 0.0 and there are 150 MB of free RAM on the VPS. Pingdom results are at http://imagebin.ca/img/6UaiOU.png How do I gain insight into the possible causes of this problem, and how would I troubleshoot it? Thanks

    Read the article

  • Using AES encryption in .NET - CryptographicException saying the padding is invalid and cannot be removed

    - by Jake Petroules
    I wrote some AES encryption code in C# and I am having trouble getting it to encrypt and decrypt properly. If I enter "test" as the passphrase and "This data must be kept secret from everyone!" I receive the following exception: System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed. at System.Security.Cryptography.RijndaelManagedTransform.DecryptData(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount, Byte[]& outputBuffer, Int32 outputOffset, PaddingMode paddingMode, Boolean fLast) at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount) at System.Security.Cryptography.CryptoStream.FlushFinalBlock() at System.Security.Cryptography.CryptoStream.Dispose(Boolean disposing) at System.IO.Stream.Close() at System.IO.Stream.Dispose() ... And if I enter something less than 16 characters I get no output. I believe I need some special handling in the encryption since AES is a block cipher, but I'm not sure exactly what that is, and I wasn't able to find any examples on the web showing how. Here is my code: using System; using System.IO; using System.Security.Cryptography; using System.Text; public static class DatabaseCrypto { public static EncryptedData Encrypt(string password, string data) { return DatabaseCrypto.Transform(true, password, data, null, null) as EncryptedData; } public static string Decrypt(string password, EncryptedData data) { return DatabaseCrypto.Transform(false, password, data.DataString, data.SaltString, data.MACString) as string; } private static object Transform(bool encrypt, string password, string data, string saltString, string macString) { using (AesManaged aes = new AesManaged()) { aes.Mode = CipherMode.CBC; aes.Padding = PaddingMode.PKCS7; int key_len = aes.KeySize / 8; int iv_len = aes.BlockSize / 8; const int salt_size = 8; const int iterations = 8192; byte[] salt = encrypt ? new Rfc2898DeriveBytes(string.Empty, salt_size).Salt : Convert.FromBase64String(saltString); byte[] bc_key = new Rfc2898DeriveBytes("BLK" + password, salt, iterations).GetBytes(key_len); byte[] iv = new Rfc2898DeriveBytes("IV" + password, salt, iterations).GetBytes(iv_len); byte[] mac_key = new Rfc2898DeriveBytes("MAC" + password, salt, iterations).GetBytes(16); aes.Key = bc_key; aes.IV = iv; byte[] rawData = encrypt ? Encoding.UTF8.GetBytes(data) : Convert.FromBase64String(data); using (ICryptoTransform transform = encrypt ? aes.CreateEncryptor() : aes.CreateDecryptor()) using (MemoryStream memoryStream = encrypt ? new MemoryStream() : new MemoryStream(rawData)) using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, encrypt ? 
CryptoStreamMode.Write : CryptoStreamMode.Read)) { if (encrypt) { cryptoStream.Write(rawData, 0, rawData.Length); return new EncryptedData(salt, mac_key, memoryStream.ToArray()); } else { byte[] originalData = new byte[rawData.Length]; int count = cryptoStream.Read(originalData, 0, originalData.Length); return Encoding.UTF8.GetString(originalData, 0, count); } } } } } public class EncryptedData { public EncryptedData() { } public EncryptedData(byte[] salt, byte[] mac, byte[] data) { this.Salt = salt; this.MAC = mac; this.Data = data; } public EncryptedData(string salt, string mac, string data) { this.SaltString = salt; this.MACString = mac; this.DataString = data; } public byte[] Salt { get; set; } public string SaltString { get { return Convert.ToBase64String(this.Salt); } set { this.Salt = Convert.FromBase64String(value); } } public byte[] MAC { get; set; } public string MACString { get { return Convert.ToBase64String(this.MAC); } set { this.MAC = Convert.FromBase64String(value); } } public byte[] Data { get; set; } public string DataString { get { return Convert.ToBase64String(this.Data); } set { this.Data = Convert.FromBase64String(value); } } } static void ReadTest() { Console.WriteLine("Enter password: "); string password = Console.ReadLine(); using (StreamReader reader = new StreamReader("aes.cs.txt")) { EncryptedData enc = new EncryptedData(); enc.SaltString = reader.ReadLine(); enc.MACString = reader.ReadLine(); enc.DataString = reader.ReadLine(); Console.WriteLine("The decrypted data was: " + DatabaseCrypto.Decrypt(password, enc)); } } static void WriteTest() { Console.WriteLine("Enter data: "); string data = Console.ReadLine(); Console.WriteLine("Enter password: "); string password = Console.ReadLine(); EncryptedData enc = DatabaseCrypto.Encrypt(password, data); using (StreamWriter stream = new StreamWriter("aes.cs.txt")) { stream.WriteLine(enc.SaltString); stream.WriteLine(enc.MACString); stream.WriteLine(enc.DataString); Console.WriteLine("The encrypted data was: " + enc.DataString); } }
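
    One plausible cause (an assumption, not a verified diagnosis of this code): on the encrypt path the ciphertext is read out of the MemoryStream before the CryptoStream has flushed its final padded block, so the last block is missing and decryption then reports invalid padding. A minimal sketch of the encrypt step with the flush in place:

      using System.IO;
      using System.Security.Cryptography;

      static class AesHelper
      {
          // Encrypts data and makes sure the final padded block is written
          // before the ciphertext is taken from the MemoryStream.
          public static byte[] Encrypt(AesManaged aes, byte[] plaintext)
          {
              using (var ms = new MemoryStream())
              using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
              {
                  cs.Write(plaintext, 0, plaintext.Length);
                  cs.FlushFinalBlock();   // emit the last (padded) block
                  return ms.ToArray();    // only now is the ciphertext complete
              }
          }
      }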

    Read the article

  • Java OutOfMemoryError strange behaviour

    - by Evgeniy Dorofeev
    Assuming we have a max memory of 256M, why does this code work: public static void main(String... args) { for (int i = 0; i < 2; i++) { byte[] a1 = new byte[150000000]; } byte[] a2 = new byte[150000000]; } but this one throws an OOME? public static void main(String... args) { //for (int i = 0; i < 2; i++) { byte[] a1 = new byte[150000000]; } byte[] a2 = new byte[150000000]; }

    Read the article

  • Decompressing a very large serialized object and managing memory

    - by Mike_G
    I have an object that contains tons of data used for reports. In order to get this object from the server to the client I first serialize the object in a memory stream, then compress it using the Gzip stream of .NET. I then send the compressed object as a byte[] to the client. The problem is on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemory exception is thrown. Ive read that this exception can be caused by new() a bunch of objects, or holding on to a bunch of strings. Both of these are happening during the deserialization process. So my question is: How do I prevent the exception (any good strategies)? The client needs all of the data, and ive trimmed down the number of strings as much as i can. edit: here is the code i am using to serialize/compress (implemented as extension methods) public static byte[] SerializeObject<T>(this object obj, T serializer) where T: XmlObjectSerializer { Type t = obj.GetType(); if (!Attribute.IsDefined(t, typeof(DataContractAttribute))) return null; byte[] initialBytes; using (MemoryStream stream = new MemoryStream()) { serializer.WriteObject(stream, obj); initialBytes = stream.ToArray(); } return initialBytes; } public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer { Type t = obj.GetType(); if(!Attribute.IsDefined(t, typeof(DataContractAttribute))) return null; byte[] initialBytes = obj.SerializeObject(serializer); byte[] compressedBytes; using (MemoryStream stream = new MemoryStream(initialBytes)) { using (MemoryStream output = new MemoryStream()) { using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress)) { Pump(stream, zipper); } compressedBytes = output.ToArray(); } } return compressedBytes; } internal static void Pump(Stream input, Stream output) { byte[] bytes = new byte[4096]; int n; while ((n = input.Read(bytes, 0, bytes.Length)) != 0) { output.Write(bytes, 0, n); } } And here is my code for decompress/deserialize: public static T DeSerializeObject<T,TU>(this byte[] serializedObject, TU deserializer) where TU: XmlObjectSerializer { using (MemoryStream stream = new MemoryStream(serializedObject)) { return (T)deserializer.ReadObject(stream); } } public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU: XmlObjectSerializer { byte[] decompressedBytes; using(MemoryStream stream = new MemoryStream(compressedBytes)) { using(MemoryStream output = new MemoryStream()) { using(GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress)) { ObjectExtensions.Pump(zipper, output); } decompressedBytes = output.ToArray(); } } return decompressedBytes.DeSerializeObject<T, TU>(deserializer); } The object that I am passing is a wrapper object, it just contains all the relevant objects that hold the data. The number of objects can be a lot (depending on the reports date range), but ive seen as many as 25k strings. One thing i did forget to mention is I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer, and all my objects are marked with the DataContract attribute.
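
    One strategy worth trying (a sketch, under the assumption that buffering the fully decompressed bytes is what exhausts memory): stream the GZipStream straight into the deserializer instead of expanding everything into an intermediate array first.

      using System.IO;
      using System.IO.Compression;
      using System.Runtime.Serialization;

      static class StreamingDecompression
      {
          // Deserializes directly from the decompression stream, avoiding the
          // large intermediate decompressedBytes array.
          public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer)
              where TU : XmlObjectSerializer
          {
              using (var input = new MemoryStream(compressedBytes))
              using (var zipper = new GZipStream(input, CompressionMode.Decompress))
              {
                  return (T)deserializer.ReadObject(zipper);
              }
          }
      }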

    Read the article

  • Why must fixed-size buffers (arrays) be unsafe?

    - by brickner
    Let's say I want to have a value type of 7 bytes (or 3, or 777). I can define it like this: public struct Buffer71 { public byte b0; public byte b1; public byte b2; public byte b3; public byte b4; public byte b5; public byte b6; } A simpler way to define it is with a fixed buffer: public struct Buffer72 { public unsafe fixed byte bs[7]; } Of course the second definition is simpler. The problem lies with the unsafe keyword that must be provided for fixed buffers. I understand that this is implemented using pointers and is hence unsafe. My question is: why does it have to be unsafe? Why can't C# provide arbitrary constant-length arrays and keep them as a value type, instead of requiring a C# reference-type array or unsafe buffers?

    Read the article

  • OutOfMemoryException when reading a 500 MB file with FileStream

    - by Alhambra Eidos
    Hi all, I'm using FileStream to read a big file (500 MB) and I get an OutOfMemoryException. Any solutions? My code is: using (var fs3 = new FileStream(filePath2, FileMode.Open, FileAccess.Read)) { byte[] b2 = ReadFully(fs3, 1024); } public static byte[] ReadFully(Stream stream, int initialLength) { // If we've been passed an unhelpful initial length, just // use 32K. if (initialLength < 1) { initialLength = 32768; } byte[] buffer = new byte[initialLength]; int read = 0; int chunk; while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0) { read += chunk; // If we've reached the end of our buffer, check to see if there's // any more information if (read == buffer.Length) { int nextByte = stream.ReadByte(); // End of stream? If so, we're done if (nextByte == -1) { return buffer; } // Nope. Resize the buffer, put in the byte we've just // read, and continue byte[] newBuffer = new byte[buffer.Length * 2]; Array.Copy(buffer, newBuffer, buffer.Length); newBuffer[read] = (byte)nextByte; buffer = newBuffer; read++; } } // Buffer is now too big. Shrink it. byte[] ret = new byte[read]; Array.Copy(buffer, ret, read); return ret; } Thanks in advance.
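
    For a file this size, a common alternative (sketched here, assuming the caller can work on chunks rather than needing the whole file at once) is to process the stream in fixed-size buffers so no 500 MB array is ever allocated:

      using System;
      using System.IO;

      static class ChunkedReader
      {
          // Processes the file in 64 KB chunks instead of loading it all into memory.
          public static void Process(string path, Action<byte[], int> handleChunk)
          {
              using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
              {
                  var buffer = new byte[64 * 1024];
                  int read;
                  while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                  {
                      handleChunk(buffer, read); // only the first 'read' bytes are valid
                  }
              }
          }
      }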

    Read the article

  • Some unclear PHP syntax

    - by serhio
    I am a PHP beginner and saw on a forum a PHP expression like this: $regex = <<<'END' / ( [\x00-\x7F] # single-byte sequences 0xxxxxxx | [\xC0-\xDF][\x80-\xBF] # double-byte sequences 110xxxxx 10xxxxxx | [\xE0-\xEF][\x80-\xBF]{2} # triple-byte sequences 1110xxxx 10xxxxxx * 2 | [\xF0-\xF7][\x80-\xBF]{3} # quadruple-byte sequence 11110xxx 10xxxxxx * 3 ) | ( [\x80-\xBF] ) # invalid byte in range 10000000 - 10111111 | ( [\xC0-\xFF] ) # invalid byte in range 11000000 - 11111111 /x END; Is this code correct? What do these constructions, which are strange to me, mean: <<<'END', /, /x and END;? Thanks

    Read the article

  • CPU and Data alignment

    - by MS
    Dear all, pardon me if you feel this has been answered numerous times, but I need answers to the following queries. Why does data have to be aligned (on 2-byte / 4-byte / 8-byte boundaries)? My doubt is that when the CPU has address lines Ax Ax-1 Ax-2 ... A2 A1 A0, it is quite possible to address memory locations sequentially, so why is there a need to align data at specific boundaries? How do I find the alignment requirements when I am compiling my code and generating the executable? If, for example, the data alignment is a 4-byte boundary, does that mean each consecutive byte is located at a modulo-4 offset? In other words, if data is 4-byte aligned and a byte is at 1004, is the next byte at 1008 (or at 1005)? Your thoughts are most welcome. Thanks in advance! /MS
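
    On the "how to find the alignment" part, one way to see the padding the compiler inserts is to ask the runtime for a struct's size and field offsets. A sketch in C# (the numbers in the comments are typical values, not guarantees, since padding depends on the platform and layout rules):

      using System;
      using System.Runtime.InteropServices;

      [StructLayout(LayoutKind.Sequential)]
      struct Sample
      {
          public byte A;   // 1 byte, typically followed by 3 bytes of padding
          public int B;    // 4 bytes, usually aligned to a 4-byte boundary
      }

      static class AlignmentDemo
      {
          static void Main()
          {
              // Marshal reports the unmanaged size and field offsets after padding.
              Console.WriteLine(Marshal.SizeOf(typeof(Sample)));                  // commonly 8, not 5
              Console.WriteLine(Marshal.OffsetOf(typeof(Sample), "B").ToInt32()); // commonly 4
          }
      }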

    Read the article

  • ASP.NET membership salt?

    - by chobo2
    Hi, does anyone know how ASP.NET membership generates its salt key and how it then encodes it (i.e. is it salt + password or password + salt)? I am using SHA1 with my membership, but I would like to recreate the same salts so the built-in membership code can hash things the same way mine does. Thanks. Edit 2: Never mind, I misread it and thought it said bytes, not bits, so I was passing in 128 bytes, not 128 bits. Edit: I've been trying to make it work, and this is what I have: public string EncodePassword(string password, string salt) { byte[] bytes = Encoding.Unicode.GetBytes(password); byte[] src = Encoding.Unicode.GetBytes(salt); byte[] dst = new byte[src.Length + bytes.Length]; Buffer.BlockCopy(src, 0, dst, 0, src.Length); Buffer.BlockCopy(bytes, 0, dst, src.Length, bytes.Length); HashAlgorithm algorithm = HashAlgorithm.Create("SHA1"); byte[] inArray = algorithm.ComputeHash(dst); return Convert.ToBase64String(inArray); } private byte[] createSalt(byte[] saltSize) { byte[] saltBytes = saltSize; RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider(); rng.GetNonZeroBytes(saltBytes); return saltBytes; } So I have not tried to see if the ASP.NET membership will recognize this yet, but the hashed password looks close. I just don't know how to convert the salt to base64. I did this: byte[] storeSalt = createSalt(new byte[128]); string salt = Encoding.Unicode.GetString(storeSalt); string base64Salt = Convert.ToBase64String(storeSalt); int test = base64Salt.Length; The test length is 172, which is well over 128 bits, so what am I doing wrong? This is what their salt looks like: vkNj4EvbEPbk1HHW+K8y/A== This is what my salt looks like: E9oEtqo0livLke9+csUkf2AOLzFsOvhkB/NocSQm33aySyNOphplx9yH2bgsHoEeR/aw/pMe4SkeDvNVfnemoB4PDNRUB9drFhzXOW5jypF9NQmBZaJDvJ+uK3mPXsWkEcxANn9mdRzYCEYCaVhgAZ5oQRnnT721mbFKpfc4kpI=
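
    For comparison, the provider's salt in the example (vkNj4EvbEPbk1HHW+K8y/A==) is 24 base64 characters, i.e. 16 raw bytes (128 bits). A sketch of generating a salt of that size and encoding it the same way (an illustration of the bytes-vs-bits fix mentioned in the edit, not the membership provider's exact internals):

      using System;
      using System.Security.Cryptography;

      static class SaltGenerator
      {
          // 128 bits = 16 bytes; base64 of 16 bytes is 24 characters, which
          // matches the length of the built-in provider's salt strings.
          public static string CreateSalt()
          {
              var saltBytes = new byte[16];
              var rng = new RNGCryptoServiceProvider();
              rng.GetBytes(saltBytes);
              return Convert.ToBase64String(saltBytes);
          }
      }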

    Read the article
