Search Results

Search found 8219 results on 329 pages for 'less'.

Page 168/329 | < Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >

  • Managing Unique IDs in stateless (web) DB4O applications

    - by Phil.Wheeler
    I'm playing around with building a new web application using DB4O - piles of fun and some really interesting stuff learned. The one thing I'm struggling with is DB4O's current lack of support for stateless applications (i.e. web apps, mostly) and the need for automatically-generated IDs. There are a number of creative and interesting approaches that I've been able to find that hook into DB4O's events, use GUIDs rather than numeric IDs, or for whatever reason avoid using any system of ID at all. While each approach has its merits, I'm wondering if the less-elegant approach might equally be the best fit. Consider the following pseudo-code:

        If ID == 0 or null
            Set ID = (typeof(myObject)).Count
        myObject.Save

    It seems like such a blindingly simple approach, it's usually about here that I start thinking, "I've missed something really obvious". Have I?
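
    Roughly, the count-based scheme in runnable form - a sketch against db4o's Java API; the Customer class, file name, and the count-plus-one assignment are illustrative assumptions, not anything from the question:

        import com.db4o.Db4oEmbedded;
        import com.db4o.ObjectContainer;

        // Illustrative persistent class; 0 is the "no ID assigned yet" sentinel.
        class Customer {
            long id;
            String name;
            Customer(String name) { this.name = name; }
        }

        public class CountBasedIds {
            public static void main(String[] args) {
                ObjectContainer db = Db4oEmbedded.openFile(
                        Db4oEmbedded.newConfiguration(), "app.db4o");
                try {
                    Customer c = new Customer("Alice");
                    if (c.id == 0) {
                        // ID = number of stored objects of this type, plus one so the
                        // first object does not reuse the sentinel value 0. Note this
                        // is racy under concurrent requests and breaks after deletes.
                        c.id = db.query(Customer.class).size() + 1;
                    }
                    db.store(c);
                    db.commit();
                } finally {
                    db.close();
                }
            }
        }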

    Read the article

  • How to find the two most distant points?

    - by depesz
    This is a question that I was asked on a job interview some time ago, and I still can't figure out a sensible answer. The question is: you are given a set of points (x,y). Find the 2 most distant points - distant from each other. For example, for the points (0,0), (1,1), (-8,5), the most distant are (1,1) and (-8,5), because the distance between them is larger than both (0,0)-(1,1) and (0,0)-(-8,5). The obvious approach is to calculate all distances between all points and find the maximum. The problem is that it is O(n^2), which makes it prohibitively expensive for large datasets. There is an approach that first tracks the points that are on the boundary and then calculates distances only for them, on the premise that there will be fewer points on the boundary than "inside", but it's still expensive and will fail in the worst case. I tried to search the web, but didn't find any sensible answer - although this might simply be my lack of search skills.
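
    A sketch of the boundary-first idea mentioned above: compute the convex hull (Andrew's monotone chain here), then compare only hull vertices, since the farthest pair is always realised by two hull points. This is still quadratic in the hull size, but the hull is usually far smaller than the input; the fully O(n log n) answer replaces the inner loops with rotating calipers. The Point class and method names are just for illustration:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        final class Point {
            final double x, y;
            Point(double x, double y) { this.x = x; this.y = y; }
        }

        public class Diameter {
            // Cross product of (b-a) x (c-a); > 0 means a->b->c turns counter-clockwise.
            static double cross(Point a, Point b, Point c) {
                return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
            }

            // Andrew's monotone chain: returns hull points in counter-clockwise order.
            static List<Point> convexHull(List<Point> pts) {
                List<Point> p = new ArrayList<>(pts);
                p.sort((u, v) -> u.x != v.x ? Double.compare(u.x, v.x) : Double.compare(u.y, v.y));
                int n = p.size();
                if (n < 3) return p;
                Point[] hull = new Point[2 * n];
                int k = 0;
                for (int i = 0; i < n; i++) {                      // lower hull
                    while (k >= 2 && cross(hull[k - 2], hull[k - 1], p.get(i)) <= 0) k--;
                    hull[k++] = p.get(i);
                }
                for (int i = n - 2, lower = k + 1; i >= 0; i--) {  // upper hull
                    while (k >= lower && cross(hull[k - 2], hull[k - 1], p.get(i)) <= 0) k--;
                    hull[k++] = p.get(i);
                }
                return Arrays.asList(Arrays.copyOf(hull, k - 1));  // last point repeats the first
            }

            // Farthest pair: only hull vertices can realise the maximum distance.
            static Point[] farthestPair(List<Point> pts) {
                List<Point> h = convexHull(pts);
                Point best1 = h.get(0), best2 = h.get(0);
                double best = -1;
                for (int i = 0; i < h.size(); i++)
                    for (int j = i + 1; j < h.size(); j++) {
                        double dx = h.get(i).x - h.get(j).x, dy = h.get(i).y - h.get(j).y;
                        double d = dx * dx + dy * dy;              // compare squared distances
                        if (d > best) { best = d; best1 = h.get(i); best2 = h.get(j); }
                    }
                return new Point[] { best1, best2 };
            }

            public static void main(String[] args) {
                List<Point> pts = Arrays.asList(new Point(0, 0), new Point(1, 1), new Point(-8, 5));
                Point[] f = farthestPair(pts);
                System.out.println("(" + f[0].x + "," + f[0].y + ") and (" + f[1].x + "," + f[1].y + ")");
            }
        }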

    Read the article

  • Best Online Programming Degree? (Masters Level)

    - by Jason
    I am less than a year from graduating with my bachelor's in web development; however, I would like to continue on with a master's in programming. As far as I can tell, what I want is a master's in software engineering. Sadly my current college only offers more management-oriented master's level degrees, and I want something that is programming, not business. Ultimately my goal is to teach online and work freelance on the side. Here's the problem - I am visually impaired, so I do not drive and I prefer to take my classes entirely online. I have heard enough to avoid University of Phoenix... and I have heard some good things about Walden, Regis, and Penn State's online MSSE programs. I am wondering if anyone here knows of any other good ones, or ones I should avoid. I have heard mixed reviews of Colorado Tech.

    Read the article

  • How to deploy and register a VSPackage supporting multiple versions of Visual Studio (2005, 2008, 2010)

    - by Steve Cadwallader
    I have an open source VSPackage that I would like to release with support for Visual Studio 2005, Visual Studio 2008, and Visual Studio 2010. I'm trying to figure out how to create the installer and how to perform the package registration with each edition of Visual Studio. The deployment research I've done indicates my best bet for an installer is a VSIX inside an MSI. The registration research I've done is a lot less clear. VSPackage registration seems to differ for every edition (VS2005 uses regpkg, VS2008 uses pkgdef, VS2010 uses VSIX). Can anyone share their experiences and/or point me towards any information about the best approach for targeting multiple versions of Visual Studio? I'm looking for the easiest implementation and preferably keeping it in a single installer if reasonably feasible. Any help would be greatly appreciated!

    Read the article

  • sending instant messages through python (msn)

    - by code_by_night
    OK, I am well aware there are many other questions about this, but I have been searching and have yet to find a solid, proper answer that doesn't revolve around Jabber or something worse (no offense to Jabber users, I just don't want all the extras that come with it). I currently have msnp and twisted.words; I simply want to send and receive messages, have read many examples that have failed to work, and msnp is poorly documented. My preference is msnp as it requires much less code; I'm not looking for something complicated. Using this code I can log in and view my friends that are online (can't send them messages though):

        import msnp
        import time, threading

        msn = msnp.Session()
        msn.login('[email protected]', 'XXXXXX')
        msn.sync_friend_list()

        class MSN_Thread(threading.Thread):
            def run(self):
                msn.start_chat("[email protected]")  # this does not work
                while True:
                    msn.process()
                    time.sleep(1)

        start_msn = MSN_Thread()
        start_msn.start()

    I hope I have been clear enough; it's pretty late and my head is not in a clear state after all this MSN frustration. Edit: since it seems msnp is extremely outdated, could anyone recommend, with simple examples, how I could achieve this? I don't need anything fancy that requires other accounts.

    Read the article

  • Current state of client-side XSLT

    - by Casey
    Last I heard, Blizzard was one of the few companies to put client-side XSLT into practice (2008). Is this still the case in 2011, or are more people now exploring this technique in production? It seems that modern browsers (IE9, FF4, Chrome) and client processing power are primed to exploit this standard for tangible savings in server CPU power and bandwidth on large-scale properties. Am I missing something?

    The negative aspects I'm aware of include:
        * additional rendering time
        * additional assets required on uncached page load
        * an additional layer of complexity
        * noticeably less developer experience than server-side template techniques

    The benefits I perceive include:
        * distributed template composition (offloaded on the client)
        * caching of common template fragments offloaded on the client
        * logical separation of document structure and data
        * a well-documented web standard supported by all modern browsers

    Finally, although I know it's impossible to predict the future, I am curious to know opinions on whether or not client-side XSLT's day will come. With interest in HTML5 driving users to upgrade their browsers and developers to explore new techniques, I would say yes. How about you? Thanks in advance, Casey

    Read the article

  • groovy thread for urls

    - by Srinath
    I wrote logic for testing URLs using threads. This works well for a small number of URLs but fails with more than 400 URLs to check.

        class URL extends Thread {
            def valid
            def url

            URL(url) {
                this.url = url
            }

            void run() {
                try {
                    def connection = url.toURL().openConnection()
                    connection.setConnectTimeout(10000)
                    if (connection.responseCode == 200) {
                        valid = Boolean.TRUE
                    } else {
                        valid = Boolean.FALSE
                    }
                } catch (Exception e) {
                    valid = Boolean.FALSE
                }
            }
        }

        def threads = [];
        urls.each { ur ->
            def reader = new URL(ur)
            reader.start()
            threads.add(reader);
        }

        while (threads.size() > 0) {
            for (int i = 0; i < threads.size(); i++) {
                def tr = threads.get(i);
                if (!tr.isAlive()) {
                    if (tr.valid == true) {
                        threads.remove(i);
                        i--;
                    } else {
                        threads.remove(i);
                        i--;
                    }
                }
            }
        }

    Could anyone please tell me how to optimize the logic and where I was going wrong? Thanks in advance.

    Read the article

  • Objective C - RegexKitLite - Parsing inner contents of a string, ie: start(.*?)end

    - by Stu
    Please consider the following:

        NSString *myText = @"mary had a little lamb";
        NSString *regexString = @"mary(.*?)little";
        for (NSString *match in [myText captureComponentsMatchedByRegex:regexString]) {
            NSLog(@"%@", match);
        }

    This will output two things to the console: 1) "mary had a little" 2) "had a". What I want is just the 2nd bit of information, "had a". Is there a way of matching a string and returning just the inner part? I'm fairly new to Objective-C; this feels like a rather trivial question, yet I can't find a less messy way of doing this than incrementing an integer in the for loop and on the second iteration storing the "had a" in an NSString.
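
    The array that comes back holds the whole match at index 0 and each capture group after it, which is why the console shows two strings; grabbing index 1 from that array gives the inner part on its own. For comparison, the same capture-group idea expressed with java.util.regex (a generic illustration of the technique, not RegexKitLite itself):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class InnerMatch {
            public static void main(String[] args) {
                Pattern p = Pattern.compile("mary(.*?)little");
                Matcher m = p.matcher("mary had a little lamb");
                if (m.find()) {
                    System.out.println(m.group(0)); // whole match: "mary had a little"
                    System.out.println(m.group(1)); // first capture group only: " had a "
                }
            }
        }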

    Read the article

  • Android Keep Activity from Dying

    - by GuyNoir
    The main Activity I use in my Android application uses a fair amount of memory, meaning that on a less powerful phone, it is susceptible to being killed off when not at the front. Normally this is fine, but it also happens when I am still inside my application, but have a different activity at the top of the stack (such as a preference activity). Obviously it's a problem if my application is killed while the user is still running it. Is there any way to disable the OS's ability to kill off the application for low memory problems? Thanks.
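
    As far as I know there is no supported way to stop the OS from reclaiming a backgrounded process when memory runs low; the usual mitigation is to make recreation cheap by persisting the expensive state. A minimal sketch of that pattern (the score field is an illustrative stand-in for whatever is costly to rebuild):

        import android.app.Activity;
        import android.os.Bundle;

        public class MainActivity extends Activity {
            private int score;  // stand-in for state that is expensive to rebuild

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                if (savedInstanceState != null) {
                    // Restore after the process was killed and the user came back.
                    score = savedInstanceState.getInt("score", 0);
                }
            }

            @Override
            protected void onSaveInstanceState(Bundle outState) {
                super.onSaveInstanceState(outState);
                // Called before the activity can be destroyed in the background.
                outState.putInt("score", score);
            }
        }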

    Read the article

  • clam anti-virus is slowing down my server performance

    - by Scarface
    Hey guys, I just installed ClamAV (http://sourceforge.net/projects/php-clamav/) for scanning file uploads on my Linux VPS running PHP. The problem is that for some reason just enabling the extension in the php.ini file slows down my entire network. Regular requests, such as changing pages, that should take less than 1 second take 5. Has anyone ever experienced this before, or do you have a good virus-scanning alternative for scanning file uploads?

        extension=clamav.so
        [clamav]
        clamav.dbpath="/usr/share/clamav"
        clamav.keeptmp=20
        clamav.maxreclevel=16
        clamav.maxfiles=10000
        clamav.maxfilesize=26214400
        clamav.maxscansize=104857600
        clamav.keeptmp=0

    Read the article

  • Multiple routers, subnets, gateways etc

    - by allentown
    My current setup is: the cable modem dishes out 13 static IPs (/28), a GB switch is plugged into the cable modem and has access to those 13 static IPs, and I have about 6 "servers" in use right now. The cable modem is also a firewall, DHCP server, and 3-port 10/100 switch. I am using it as a firewall, but not currently as a DHCP server.

    I have plugged two network cables into the cable modem, one of which goes to the WAN port of a Linksys Dual Band Wireless 10/100/1000 router/switch. Into the Linksys are a few workstations, a few printers, and some laptops connecting to wifi. I set the Linksys to take a static IP, and enabled DHCP for the workstations, printers, etc. in 192.168.1.1/24. The network for the Linksys is mostly self-contained; backups go to a SAN on that network, and it all happens through that switch, over GB. But I also get internet access from it as well via the cable modem, using one static IP. This all works; however, I can not "see" the static IP machines when I am on the Linksys. I can get to them via ssh and other protocols, and if I want to from "outside", I open holes, like 80, 25, 587, 143, 22, etc. The second wire from the cable modem/firewall/switch just uplinks to the managed GB switch.

    What are the pros and cons of this? I do not like giving up the static IP to the Linksys. I basically have a mixed network of public servers and internal workstations. I want the public servers on public IPs because I do not want to mess with port forwarding and mappings. Is it correct also that if someone breaches the Linksys wifi, they would still have a hard time getting to the static IP range, just by nature of the network topology?

    Today, just for a test, I toggled on the DHCP in the firewall/cable modem at the 10.1.10.1/24 range; the Linksys is in the 192.168.1.100/24 range. At that point, all the static IP machines still had in and out access, but the Linksys was unreachable.

    The cable modem only has 10/100 ports, so I will not plug anything but the network drop into it, which is 50Mb/10Mb. Which makes me think this could be less than ideal, as transfers from the workstation network to the server network will be bottlenecked at 100Mb when I have 1000Mb available. I may not need to solve that, if isolation is better though. I do not move a lot of data, if any, from the Linksys network to the server network, so for it to pretend to be remote is ok.

    Should I approach this any differently? I could enable DHCP on the cable modem/firewall; it should still send out the statics to the GB switch, but will also be a DHCP server in the 10.1.10.1/24 range. I can then plug the Linksys into the GB switch, which is now picking up statics and the 10.1.10.1/24 range, and tell the Linksys to use 10.1.10.5 or so. Now, do I disable DHCP on the Linksys, and the cable modem/firewall will pass through the statics and 10.0.10.1/24 ranges as well? Or could I open a second DHCP pool on the Linksys? I guess doing so gives me network isolation again, but it is just the reverse of what I have now. But I get out of the bottleneck, not that the Linksys could ever really touch real GB speeds anyway, but the managed switch certainly can.

    This is all because 13 statics are not that many. Right now: 6 "servers", the Linksys, a managed switch, a few SSL certs, and I am running out. I do not want to waste a static IP on the managed GB switch or the Linksys unless it provides me some type of benefit.
    Final question: under my current setup, if I am on a workstation sitting at 192.168.1.109 on the Linksys, with GB, and I send a file over ssh to a static IP machine, is that literally leaving the network and coming back in, or does it stay local? To me it seems like: Workstation (192.168.1.109) -> Linksys DHCP -> Linksys static IP -> cable modem -> server (and it hits the 10/100 ports on the cable modem, slowing me down). But does it round trip the network, leave and come back in, limiting me to the 50/10 internet speeds? *These are all made-up numbers; I do not use default router IPs, as I will one day add a VPN and do not want collisions.

    I need some recommendations: do I want one big network, or two isolated ones? Printers these days need an IP, everything does, and I can not get autoconf/bonjour to be reliable on most printers. But I am also not sure I want the "server" side of my operation to be polluted by the workstation side of my operation. Unless there is some magic subnetting I have not learned yet, here is what I am thinking:

        Cable modem 10/100, has 13 static IPs, publicly accessible ->
        Enable DHCP on the cable modem ->
        Cable modem plugs into managed switch ->
        Managed switch gets 10.1.10.1 as its ssh, telnet, https admin management address ->
        Managed switch sends static IPs to the servers ->
        Plug Linksys into managed switch, giving it 10.1.10.2 static internally in the Linksys admin ->
        Linksys gets assigned 10.1.10.x as its DHCP sending range ->
        Local printers, workstations, iPhones etc. connect to this ->
        (Do I enable DHCP or disable it on the Linksys, just define a non-overlapping range, or create an entirely new DHCP at 10.1.50.0/24? I think I am back to isolation again with that method too?)

    Thank you for any suggestions. This is the first time I have had to deal with less than a /24, and most are larger than that, but it is just a drop to a cabinet. Otherwise, it's a router, a few repeaters, and soho stuff that is simple, with one IP. I know a few may suggest going all DHCP on the servers, and I may one day, just not now; there has been too much moving of gear for me to be interested in that, and I would want something in the Catalyst series to deal with that.

    Read the article

  • An error which does not present itself with a debugger attached.

    - by ccook
    I am using Intel's FORTRAN compiler to compile a numerical library. The test case provided errors out within libc.so.6. When I attach Intel's debugger (IDB) the application runs through successfully. How do I debug a bug where the debugger prevents the bug? Note that the same bug arose with gfortran. I am working within OpenSUSE 11.2 x64. The error is:

        forrtl: severe (408): fort: (3): Subscript #1 of the array B has value -534829264 which is less than the lower bound of 1

    Read the article

  • Advantage of using a static member function instead of an equivalent non-static member function?

    - by jonathanasdf
    I was wondering whether there are any advantages to using a static member function when there is a non-static equivalent. Will it result in faster execution (because of not having to care about all of the member variables), or maybe less use of memory (because of not being included in all instances)? Basically, the function I'm looking at is a utility function to rotate an integer array representing pixel colours an arbitrary number of degrees around an arbitrary centre point. It is placed in my abstract Bullet base class, since only the bullets will be using it and I didn't want the overhead of calling it in some utility class. It's a bit too long and used in every single derived bullet class, making it probably not a good idea to inline. How would you suggest I define this function? As a static member function of Bullet, as a non-static member function of Bullet, or maybe not as a member of Bullet at all but defined outside of the class in Bullet.h? What are the advantages and disadvantages of each?

    Read the article

  • How much more productive are three monitors than two?

    - by Sir Graystar
    I am mulling over whether to buy a new monitor to go alongside my current setup of two 24(ish)-inch monitors. What I want to know is whether this is worth the money (probably around £200). I think most of us will agree that two monitors are much more productive than one when programming and developing (Jeff Atwood has said this many times on his blog, and I imagine that most of you are fans of his), but is three much more productive than two? What I'm worried about is that I will have so much space that one monitor will be used for things that are not related to the task (music, Facebook etc.) and it will actually make me less productive.

    Read the article

  • How to fetch message body and attachments in XML format using php/linux from Lotus Domino server?

    - by too
    Does anybody have information about accessing a Lotus Domino server to fetch entire mail contents via http(s) requests from a PHP/Linux server? The article by Andrei Kouvchinnikov describes well how to fetch the message list in Notes mail folders; after obtaining a session id during login one can, for example, select the top 100 messages by calling:

        https://your.server.domain/mail_db/mailbox.nsf/($Inbox)?ReadViewEntries&Start=1&Count=100

    And this works perfectly. The problem arises when I am trying to get message contents (0A1DA5EEB7B65277C12576F50055D811 is an example message unique id):

        https://your.server.domain/mail_db/mailbox.nsf/($Inbox)/0A1DA5EEB7B65277C12576F50055D811/?OpenDocument

    Such a request in IE shows a frameset with data that is hard to parse; in less common browsers like Opera it reports an unsupported browser. Ideally, if it is possible to fetch Notes message contents and all attachments by requesting them in the URL, what would that request be? A link to a Lotus web-calls reference would be even more beneficial.

    Read the article

  • How is the return type determined in a method that uses Linq2Sql?

    - by Richard77
    Hello, I usually have difficulties with return types when it comes to LINQ. I'll explain with the following examples. Let's say I have a table Products with ProductID, Name, Category, and Price as columns:

    1) IQueryable<Product>

        public IQueryable<Product> GetChildrenProducts()
        {
            return (from pd in db.Products
                    where pd.Category == "Children"
                    select pd);
        }

    2) Product

        public Product GetProduct(int id)
        {
            return (from pd in db.Products
                    where pd.ProductID == id
                    select pd).FirstOrDefault();
        }

    Now, if I decide to select, for instance, only one column (Price or Name) or even 2 or 3 columns (Name and Price), but in any case fewer than the 4 columns, what is the return type going to be? I mean this:

        public returnType GetSomeInformation()
        {
            return (from pd in db.Products
                    select new { pd.Name, pd.Price });
        }

    What SHOULD BE the returnType for GetSomeInformation()? Thanks for helping

    Read the article

  • Is this the 'Pythonic' way of doing things?

    - by Sergio Tapia
    This is my first effort on solving the exercise. I gotta say, I'm kind of liking Python. :D

        # D. verbing
        # Given a string, if its length is at least 3,
        # add 'ing' to its end.
        # Unless it already ends in 'ing', in which case
        # add 'ly' instead.
        # If the string length is less than 3, leave it unchanged.
        # Return the resulting string.
        def verbing(s):
            if len(s) >= 3:
                if s[-3:] == "ing":
                    s += "ly"
                else:
                    s += "ing"
                return s
            else:
                return s
            # +++your code here+++
            return

    What do you think I could improve on here?

    Read the article

  • FFmpeg on iPhone

    - by gn-mithun
    Hello, I have downloaded the ffmpeg libraries for iPhone and compiled them. My objective is to create a movie file from a series of images using the ffmpeg libraries. The amount of documentation for ffmpeg on iPhone is very limited. I checked an app called iFrameExtractor, which does the opposite of what I want: it extracts frames from a video. On the command line there is a command:

        ffmpeg -f image2 -i image%d.jpg video.mov

    This turns a series of images into a video. I actually checked it on my Mac and it works fine. What I wanted to know is how to get the equivalent on iPhone, or rather which class/API or method to call. There are a couple of examples of apps doing this on iPhone; not sure whether they do it through ffmpeg though. Anyways, for reference: "Time lapser" and "reel moments". Thanks in advance

    Read the article

  • my version of strlcpy

    - by robUK
    Hello, gcc 4.4.4, C89. My program does a lot of string copying. I don't want to use strncpy as it doesn't nul-terminate, and I can't use strlcpy as it's not portable. Just a few questions: how can I put my function through its paces to ensure that it is completely safe and stable? Unit testing? Is this good enough for production?

        size_t s_strlcpy(char *dest, const char *src, const size_t len)
        {
            size_t i = 0;

            /* Guard against len == 0: len is unsigned, so len - 1 below would wrap
               around, and there would be no room for the nul terminator either. */
            if (len == 0) {
                return 0;
            }

            /* Always copy 1 less than the destination size to make room for the nul */
            for (i = 0; i < len - 1; i++) {
                /* only copy up to the first nul */
                if (*src != '\0') {
                    *dest++ = *src++;
                } else {
                    break;
                }
            }
            /* nul terminate the string */
            *dest = '\0';

            /* Return the number of bytes copied */
            return i;
        }

    Many thanks for any suggestions,

    Read the article

  • Code reading: where can I read great, modern, and well-documented C++ code?

    - by baol
    Reading code is one of the best ways to learn new idioms, tricks, and techniques. Sadly, it's very common to find badly written C++ code. Some use C++ as if it were C, others as if it were Java, and some just shoot themselves in the foot. I believe gtkmm is a good example of C++ design, but a binding may not be the best code to read (you need to know the C library behind it). Boost libraries (at least the ones I have read) tend to be less readable than I'd like. Can you mention open source projects (or other projects whose source is freely readable) that are good examples of readable, modern, well-documented, and self-contained C++ code to learn from? (I believe one project per answer will be better, and I'd include the motivation that led you to select that one.)

    Read the article

  • Steganography Experiment - Trouble hiding message bits in DCT coefficients

    - by JohnHankinson
    I have an application requiring me to be able to embed loss-less data into an image. As such I've been experimenting with steganography, specifically via modification of DCT coefficients as the method I select, apart from being loss-less must also be relatively resilient against format conversion, scaling/DSP etc. From the research I've done thus far this method seems to be the best candidate. I've seen a number of papers on the subject which all seem to neglect specific details (some neglect to mention modification of 0 coefficients, or modification of AC coefficient etc). After combining the findings and making a few modifications of my own which include: 1) Using a more quantized version of the DCT matrix to ensure we only modify coefficients that would still be present should the image be JPEG'ed further or processed (I'm using this in place of simply following a zig-zag pattern). 2) I'm modifying bit 4 instead of the LSB and then based on what the original bit value was adjusting the lower bits to minimize the difference. 3) I'm only modifying the blue channel as it should be the least visible. This process must modify the actual image and not the DCT values stored in file (like jsteg) as there is no guarantee the file will be a JPEG, it may also be opened and re-saved at a later stage in a different format. For added robustness I've included the message multiple times and use the bits that occur most often, I had considered using a QR code as the message data or simply applying the reed-solomon error correction, but for this simple application and given that the "message" in question is usually going to be between 10-32 bytes I have plenty of room to repeat it which should provide sufficient redundancy to recover the true bits. No matter what I do I don't seem to be able to recover the bits at the decode stage. I've tried including / excluding various checks (even if it degrades image quality for the time being). I've tried using fixed point vs. double arithmetic, moving the bit to encode, I suspect that the message bits are being lost during the IDCT back to image. Any thoughts or suggestions on how to get this working would be hugely appreciated. (PS I am aware that the actual DCT/IDCT could be optimized from it's naive On4 operation using row column algorithm, or an FDCT like AAN, but for now it just needs to work :) ) Reference Papers: http://www.lokminglui.com/dct.pdf http://arxiv.org/ftp/arxiv/papers/1006/1006.1186.pdf Code for the Encode/Decode process in C# below: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing.Imaging; using System.Drawing; namespace ImageKey { public class Encoder { public const int HIDE_BIT_POS = 3; // use bit position 4 (1 << 3). public const int HIDE_COUNT = 16; // Number of times to repeat the message to avoid error. // JPEG Standard Quantization Matrix. // (to get higher quality multiply by (100-quality)/50 .. // for lower than 50 multiply by 50/quality. Then round to integers and clip to ensure only positive integers. public static double[] Q = {16,11,10,16,24,40,51,61, 12,12,14,19,26,58,60,55, 14,13,16,24,40,57,69,56, 14,17,22,29,51,87,80,62, 18,22,37,56,68,109,103,77, 24,35,55,64,81,104,113,92, 49,64,78,87,103,121,120,101, 72,92,95,98,112,100,103,99}; // Maximum qauality quantization matrix (if all 1's doesn't modify coefficients at all). 
public static double[] Q2 = {1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1}; public static Bitmap Encode(Bitmap b, string key) { Bitmap response = new Bitmap(b.Width, b.Height, PixelFormat.Format32bppArgb); uint imgWidth = ((uint)b.Width) & ~((uint)7); // Maximum usable X resolution (divisible by 8). uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8). // Start be transferring the unmodified image portions. // As we'll be using slightly less width/height for the encoding process we'll need the edges to be populated. for (int y = 0; y < b.Height; y++) for (int x = 0; x < b.Width; x++) { if( (x >= imgWidth && x < b.Width) || (y>=imgHeight && y < b.Height)) response.SetPixel(x, y, b.GetPixel(x, y)); } // Setup the counters and byte data for the message to encode. StringBuilder sb = new StringBuilder(); for(int i=0;i<HIDE_COUNT;i++) sb.Append(key); byte[] codeBytes = System.Text.Encoding.ASCII.GetBytes(sb.ToString()); int bitofs = 0; // Current bit position we've encoded too. int totalBits = (codeBytes.Length * 8); // Total number of bits to encode. for (int y = 0; y < imgHeight; y += 8) { for (int x = 0; x < imgWidth; x += 8) { int[] redData = GetRedChannelData(b, x, y); int[] greenData = GetGreenChannelData(b, x, y); int[] blueData = GetBlueChannelData(b, x, y); int[] newRedData; int[] newGreenData; int[] newBlueData; if (bitofs < totalBits) { double[] redDCT = DCT(ref redData); double[] greenDCT = DCT(ref greenData); double[] blueDCT = DCT(ref blueData); int[] redDCTI = Quantize(ref redDCT, ref Q2); int[] greenDCTI = Quantize(ref greenDCT, ref Q2); int[] blueDCTI = Quantize(ref blueDCT, ref Q2); int[] blueDCTC = Quantize(ref blueDCT, ref Q); HideBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes); double[] redDCT2 = DeQuantize(ref redDCTI, ref Q2); double[] greenDCT2 = DeQuantize(ref greenDCTI, ref Q2); double[] blueDCT2 = DeQuantize(ref blueDCTI, ref Q2); newRedData = IDCT(ref redDCT2); newGreenData = IDCT(ref greenDCT2); newBlueData = IDCT(ref blueDCT2); } else { newRedData = redData; newGreenData = greenData; newBlueData = blueData; } MapToRGBRange(ref newRedData); MapToRGBRange(ref newGreenData); MapToRGBRange(ref newBlueData); for(int dy=0;dy<8;dy++) { for(int dx=0;dx<8;dx++) { int col = (0xff<<24) + (newRedData[dx+(dy*8)]<<16) + (newGreenData[dx+(dy*8)]<<8) + (newBlueData[dx+(dy*8)]); response.SetPixel(x+dx,y+dy,Color.FromArgb(col)); } } } } if (bitofs < totalBits) throw new Exception("Failed to encode data - insufficient cover image coefficients"); return (response); } public static void HideBits(ref int[] DCTMatrix, ref int[] CMatrix, ref int bitofs, ref int totalBits, ref byte[] codeBytes) { int tempValue = 0; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { if ( (u != 0 || v != 0) && CMatrix[v+(u*8)] != 0 && DCTMatrix[v+(u*8)] != 0) { if (bitofs < totalBits) { tempValue = DCTMatrix[v + (u * 8)]; int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); byte value = (byte)((codeBytes[bytePos] & mask) >> bitPos); // 0 or 1. 
if (value == 0) { int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS); if (a != 0) DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS) - 1; DCTMatrix[v + (u * 8)] &= ~(1 << HIDE_BIT_POS); } else if (value == 1) { int a = DCTMatrix[v + (u * 8)] & (1 << HIDE_BIT_POS); if (a == 0) DCTMatrix[v + (u * 8)] &= ~((1 << HIDE_BIT_POS) - 1); DCTMatrix[v + (u * 8)] |= (1 << HIDE_BIT_POS); } if (DCTMatrix[v + (u * 8)] != 0) bitofs++; else DCTMatrix[v + (u * 8)] = tempValue; } } } } } public static void MapToRGBRange(ref int[] data) { for(int i=0;i<data.Length;i++) { data[i] += 128; if(data[i] < 0) data[i] = 0; else if(data[i] > 255) data[i] = 255; } } public static int[] GetRedChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x,y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 16) & 0xff) - 128; } } return (data); } public static int[] GetGreenChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x, y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 8) & 0xff) - 128; } } return (data); } public static int[] GetBlueChannelData(Bitmap b, int sx, int sy) { int[] data = new int[8 * 8]; for (int y = sy; y < (sy + 8); y++) { for (int x = sx; x < (sx + 8); x++) { uint col = (uint)b.GetPixel(x, y).ToArgb(); data[(x - sx) + ((y - sy) * 8)] = (int)((col >> 0) & 0xff) - 128; } } return (data); } public static int[] Quantize(ref double[] DCTMatrix, ref double[] Q) { int[] DCTMatrixOut = new int[8*8]; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { DCTMatrixOut[v + (u * 8)] = (int)Math.Round(DCTMatrix[v + (u * 8)] / Q[v + (u * 8)]); } } return(DCTMatrixOut); } public static double[] DeQuantize(ref int[] DCTMatrix, ref double[] Q) { double[] DCTMatrixOut = new double[8*8]; for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { DCTMatrixOut[v + (u * 8)] = (double)DCTMatrix[v + (u * 8)] * Q[v + (u * 8)]; } } return(DCTMatrixOut); } public static double[] DCT(ref int[] data) { double[] DCTMatrix = new double[8 * 8]; for (int v = 0; v < 8; v++) { for (int u = 0; u < 8; u++) { double cu = 1; if (u == 0) cu = (1.0 / Math.Sqrt(2.0)); double cv = 1; if (v == 0) cv = (1.0 / Math.Sqrt(2.0)); double sum = 0.0; for (int y = 0; y < 8; y++) { for (int x = 0; x < 8; x++) { double s = data[x + (y * 8)]; double dctVal = Math.Cos((2 * y + 1) * v * Math.PI / 16) * Math.Cos((2 * x + 1) * u * Math.PI / 16); sum += s * dctVal; } } DCTMatrix[u + (v * 8)] = (0.25 * cu * cv * sum); } } return (DCTMatrix); } public static int[] IDCT(ref double[] DCTMatrix) { int[] Matrix = new int[8 * 8]; for (int y = 0; y < 8; y++) { for (int x = 0; x < 8; x++) { double sum = 0; for (int v = 0; v < 8; v++) { for (int u = 0; u < 8; u++) { double cu = 1; if (u == 0) cu = (1.0 / Math.Sqrt(2.0)); double cv = 1; if (v == 0) cv = (1.0 / Math.Sqrt(2.0)); double idctVal = (cu * cv) / 4.0 * Math.Cos((2 * y + 1) * v * Math.PI / 16) * Math.Cos((2 * x + 1) * u * Math.PI / 16); sum += (DCTMatrix[u + (v * 8)] * idctVal); } } Matrix[x + (y * 8)] = (int)Math.Round(sum); } } return (Matrix); } } public class Decoder { public static string Decode(Bitmap b, int expectedLength) { expectedLength *= Encoder.HIDE_COUNT; uint imgWidth = ((uint)b.Width) & ~((uint)7); // Maximum usable X resolution (divisible by 8). uint imgHeight = ((uint)b.Height) & ~((uint)7); // Maximum usable Y resolution (divisible by 8). 
// Setup the counters and byte data for the message to decode. byte[] codeBytes = new byte[expectedLength]; byte[] outBytes = new byte[expectedLength / Encoder.HIDE_COUNT]; int bitofs = 0; // Current bit position we've decoded too. int totalBits = (codeBytes.Length * 8); // Total number of bits to decode. for (int y = 0; y < imgHeight; y += 8) { for (int x = 0; x < imgWidth; x += 8) { int[] blueData = ImageKey.Encoder.GetBlueChannelData(b, x, y); double[] blueDCT = ImageKey.Encoder.DCT(ref blueData); int[] blueDCTI = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q2); int[] blueDCTC = ImageKey.Encoder.Quantize(ref blueDCT, ref Encoder.Q); if (bitofs < totalBits) GetBits(ref blueDCTI, ref blueDCTC, ref bitofs, ref totalBits, ref codeBytes); } } bitofs = 0; for (int i = 0; i < (expectedLength / Encoder.HIDE_COUNT) * 8; i++) { int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); List<int> values = new List<int>(); int zeroCount = 0; int oneCount = 0; for (int j = 0; j < Encoder.HIDE_COUNT; j++) { int val = (codeBytes[bytePos + ((expectedLength / Encoder.HIDE_COUNT) * j)] & mask) >> bitPos; values.Add(val); if (val == 0) zeroCount++; else oneCount++; } if (oneCount >= zeroCount) outBytes[bytePos] |= mask; bitofs++; values.Clear(); } return (System.Text.Encoding.ASCII.GetString(outBytes)); } public static void GetBits(ref int[] DCTMatrix, ref int[] CMatrix, ref int bitofs, ref int totalBits, ref byte[] codeBytes) { for (int u = 0; u < 8; u++) { for (int v = 0; v < 8; v++) { if ((u != 0 || v != 0) && CMatrix[v + (u * 8)] != 0 && DCTMatrix[v + (u * 8)] != 0) { if (bitofs < totalBits) { int bytePos = (bitofs) >> 3; int bitPos = (bitofs) % 8; byte mask = (byte)(1 << bitPos); int value = DCTMatrix[v + (u * 8)] & (1 << Encoder.HIDE_BIT_POS); if (value != 0) codeBytes[bytePos] |= mask; bitofs++; } } } } } } } UPDATE: By switching to using a QR Code as the source message and swapping a pair of coefficients in each block instead of bit manipulation I've been able to get the message to survive the transform. However to get the message to come through without corruption I have to adjust both coefficients as well as swap them. For example swapping (3,4) and (4,3) in the DCT matrix and then respectively adding 8 and subtracting 8 as an arbitrary constant seems to work. This survives a re-JPEG'ing of 96 but any form of scaling/cropping destroys the message again. I was hoping that by operating on mid to low frequency values that the message would be preserved even under some light image manipulation.

    Read the article

  • Sync GIT and ClearCase

    - by Senthil A Kumar
    I am currently working on ClearCase and now migrating to GIT, but we need this migration done in a way that all work will be done in GIT and the data will be synced back to the ClearCase stream. We will have the same branch names and stream names in both GIT and CC, so scripting shouldn't be a problem. The problem here is: can someone suggest which is the best model to sync CC and GIT?

    1) Have all the VOBs in CC as a single repo in GIT, and have the major streams in CC as various branches in GIT. Single GIT repo (VOBs) and many branches (CC streams). This takes up less space, as the VOBs are kept as a single repo with many branches.

    2) Have important CC branches as independent GIT repositories, with each repository having all the CC VOBs. Many GIT repos for many CC branches. This will take up lots of space, as the VOBs will be replicated across repositories.

    Which do you think is the best way to keep it in sync with ClearCase?

    Read the article

  • What is the proper way to resolve Eclipse warning "isn't parameterized"?

    - by Morinar
    I'm trying to clean up some warnings in some old Java code (in Eclipse), and I'm unsure what the proper thing to do is in this case. The block looks more or less like this:

        Transferable content = getToolkit().getSystemClipboard().getContents( null );
        java.util.List clipboardFileList = null;
        if( content.isDataFlavorSupported( DataFlavor.javaFileListFlavor ) ) {
            try {
                clipboardFileList = (java.util.List)content.getTransferData( DataFlavor.javaFileListFlavor );
            }
            /* Do other crap, etc. */
        }

    The List generates a warning as it isn't parameterized; however, if I parameterize it with <File>, which I'm pretty sure is what it requires, it complains that it can't convert from Object to List<File>. I could merely suppress the unchecked warning for the function, but would prefer to avoid that if there is a "good" solution. Thoughts?
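
    One way to keep the compiler happy without a blanket @SuppressWarnings is to take the transfer data as a wildcard List<?> and check element types as they are copied out - a sketch of that approach, with the method and variable names being purely illustrative:

        import java.awt.Toolkit;
        import java.awt.datatransfer.DataFlavor;
        import java.awt.datatransfer.Transferable;
        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        public class ClipboardFiles {
            public static List<File> clipboardFiles() throws Exception {
                Transferable content =
                        Toolkit.getDefaultToolkit().getSystemClipboard().getContents(null);
                List<File> files = new ArrayList<File>();
                if (content != null
                        && content.isDataFlavorSupported(DataFlavor.javaFileListFlavor)) {
                    // getTransferData returns Object; treat it as a wildcard list and
                    // check each element instead of an unchecked cast to List<File>.
                    List<?> raw = (List<?>) content.getTransferData(DataFlavor.javaFileListFlavor);
                    for (Object o : raw) {
                        if (o instanceof File) {
                            files.add((File) o);
                        }
                    }
                }
                return files;
            }
        }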

    Read the article

  • Pythonic way of adding "ly" to end of string if it ends in "ing"?

    - by Sergio Tapia
    This is my first effort on solving the exercise. I gotta say, I'm kind of liking Python. :D

        # D. verbing
        # Given a string, if its length is at least 3,
        # add 'ing' to its end.
        # Unless it already ends in 'ing', in which case
        # add 'ly' instead.
        # If the string length is less than 3, leave it unchanged.
        # Return the resulting string.
        def verbing(s):
            if len(s) >= 3:
                if s[-3:] == "ing":
                    s += "ly"
                else:
                    s += "ing"
                return s
            else:
                return s
            # +++your code here+++
            return

    What do you think I could improve on here?

    Read the article

  • Vbscript - Compare and copy files from folder if newer than destination files

    - by Kenny Bones
    Hi, I'm trying to design a script that's supposed to be used as part of a logon script for a lot of users. This script is basically supposed to take a source folder and a destination folder and just make sure that the destination folder has the exact same content as the source folder, but only copy a file if the date-modified stamp of the source file is newer than that of the destination file. I have been thinking out this basic pseudo-code, just trying to make sure it is valid and solid:

        Dim strSourceFolder, strDestFolder
        strSourceFolder = "C:\Users\User\SourceFolder\"
        strDestFolder = "C:\Users\User\DestFolder\"

        For each file in StrSourceFolder
            ReplaceIfNewer (file, strDestFolder)
        Next

        Sub ReplaceIfNewer (SourceFile, DestFolder)
            Dim DateModifiedSourceFile, DateModifiedDestFile
            DateModifiedSourceFile = SourceFile.DateModified()
            DateModifiedDestFile = DestFolder & "\" & SourceFile.DateModified()
            If DateModifiedSourceFile < DateModifiedDestFile
                Copy SourceFile to SourceFolder
            End if
        End Sub

    Would this work? I'm not quite sure how it can be done, but I could probably spend all day figuring it out. The people here are generally so amazingly smart that with your help it would take a lot less time :)

    Read the article

< Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >