Search Results

Search found 30347 results on 1214 pages for 'jamie read'.

Page 120 of 1214

  • Break a class in twain, or impose an interface for restricted access?

    - by bedwyr
    What's the best way of partitioning a class when its functionality needs to be externally accessed in different ways by different classes? Hopefully the following example will make the question clear :) I have a Java class which accesses a single location in a directory, allowing external classes to perform read/write operations on it. Read operations return usage stats on the directory (e.g. available disk space, number of writes, etc.); write operations, obviously, allow external classes to write data to the disk. These methods always work on the same location, and receive their configuration (e.g. which directory to use, min disk space, etc.) from an external source (passed to the constructor). This class looks something like this: public class DiskHandler { public DiskHandler(String dir, int minSpace) { ... } public void writeToDisk(String contents, String filename) { int space = getAvailableSpace(); ... } public int getAvailableSpace() { ... } } There's quite a bit more going on, but this will suffice.

    This class needs to be accessed differently by two external classes. One class needs access to the read operations; the other needs access to both read and write operations. public class DiskWriter { DiskHandler diskHandler; public DiskWriter() { diskHandler = new DiskHandler(...); } public void doSomething() { diskHandler.writeToDisk(...); } } public class DiskReader { DiskHandler diskHandler; public DiskReader() { diskHandler = new DiskHandler(...); } public void doSomething() { int space = diskHandler.getAvailableSpace(...); } } At this point both classes share the same class, but the class which should only read has access to the write methods.

    Solution 1: I could break this class into two. One class would handle read operations, and the other would handle writes: // NEW "UTILITY" CLASSES public class WriterUtil { private ReaderUtil diskReader; public WriterUtil(String dir, int minSpace) { ... diskReader = new ReaderUtil(dir, minSpace); } public void writeToDisk(String contents, String filename) { int space = diskReader.getAvailableSpace(); ... } } public class ReaderUtil { public ReaderUtil(String dir, int minSpace) { ... } public int getAvailableSpace() { ... } } // MODIFIED EXTERNALLY-ACCESSING CLASSES public class DiskWriter { WriterUtil diskWriter; public DiskWriter() { diskWriter = new WriterUtil(...); } public void doSomething() { diskWriter.writeToDisk(...); } } public class DiskReader { ReaderUtil diskReader; public DiskReader() { diskReader = new ReaderUtil(...); } public void doSomething() { int space = diskReader.getAvailableSpace(...); } } This solution prevents classes from having access to methods they should not, but it also breaks encapsulation. The original DiskHandler class was completely self-contained and only needed its config parameters via a single constructor. By breaking the functionality apart into read/write classes, both of them are now concerned with the directory and both need to be instantiated with their respective values. In essence, I don't really want to duplicate the concerns.

    Solution 2: I could implement an interface which only exposes read operations, and use it when a class only needs access to those methods. The interface might look something like this: public interface Readable { int getAvailableSpace(); } The reader class would instantiate the object like this: Readable diskReader; public DiskReader() { diskReader = new DiskHandler(...); } This solution seems brittle and prone to confusion in the future. It doesn't guarantee that developers will use the correct interface down the road, and any change to the implementation of DiskHandler might require updating the interface as well as the accessing classes. I like it better than the previous solution, but not by much. Frankly, neither of these solutions seems perfect, but I'm not sure whether one should be preferred over the other. I really don't want to break the original class up, but I also don't know whether the interface buys me much in the long run. Are there other solutions I'm missing?
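    One variation on Solution 2 worth sketching: keep DiskHandler intact, have it implement the read-only interface itself, and hand the reader only the interface type via its constructor, so the reader never constructs or even sees the concrete class. A minimal sketch, with an invented interface name (DiskReadable, to avoid clashing with java.lang.Readable):

        // Read-only view of the handler; write access stays on the concrete class.
        public interface DiskReadable {
            int getAvailableSpace();
        }

        public class DiskHandler implements DiskReadable {
            public DiskHandler(String dir, int minSpace) { /* ... */ }

            @Override
            public int getAvailableSpace() { /* ... */ return 0; }

            public void writeToDisk(String contents, String filename) { /* ... */ }
        }

        public class DiskReader {
            private final DiskReadable disk; // the reader only ever sees the read-only type

            public DiskReader(DiskReadable disk) { this.disk = disk; }

            public void doSomething() { int space = disk.getAvailableSpace(); }
        }

    Passing the same DiskHandler instance into DiskReader as a DiskReadable keeps the configuration in one place; it does not stop a determined developer from downcasting, but it does make the read-only intent explicit in the type system.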

    Read the article

  • Resumable upload from Java client to Grails web application?

    - by dersteps
    After almost 2 workdays of Googling and trying several different possibilities I found throughout the web, I'm asking this question here, hoping that I might finally get an answer. First of all, here's what I want to do: I'm developing a client and a server application with the purpose of exchanging a lot of large files between multiple clients on a single server. The client is developed in pure Java (JDK 1.6), while the web application is done in Grails (2.0.0). As the purpose of the client is to allow users to exchange a lot of large files (usually about 2GB each), I have to implement it in a way, so that the uploads are resumable, i.e. the users are able to stop and resume uploads at any time. Here's what I did so far: I actually managed to do what I wanted to do and stream large files to the server while still being able to pause and resume uploads using raw sockets. I would send a regular request to the server (using Apache's HttpClient library) to get the server to send me a port that was free for me to use, then open a ServerSocket on the server and connect to that particular socket from the client. Here's the problem with that: Actually, there are at least two problems with that: I open those ports myself, so I have to manage open and used ports myself. This is quite error-prone. I actually circumvent Grails' ability to manage a huge amount of (concurrent) connections. Finally, here's what I'm supposed to do now and the problem: As the problems I mentioned above are unacceptable, I am now supposed to use Java's URLConnection/HttpURLConnection classes, while still sticking to Grails. Connecting to the server and sending simple requests is no problem at all, everything worked fine. The problems started when I tried to use the streams (the connection's OutputStream in the client and the request's InputStream in the server). Opening the client's OutputStream and writing data to it is as easy as it gets. But reading from the request's InputStream seems impossible to me, as that stream is always empty, as it seems. Example Code Here's an example of the server side (Groovy controller): def test() { InputStream inStream = request.inputStream if(inStream != null) { int read = 0; byte[] buffer = new byte[4096]; long total = 0; println "Start reading" while((read = inStream.read(buffer)) != -1) { println "Read " + read + " bytes from input stream buffer" //<-- this is NEVER called } println "Reading finished" println "Read a total of " + total + " bytes" // <-- 'total' will always be 0 (zero) } else { println "Input Stream is null" // <-- This is NEVER called } } This is what I did on the client side (Java class): public void connect() { final URL url = new URL("myserveraddress"); final byte[] message = "someMessage".getBytes(); // Any byte[] - will be a file one day HttpURLConnection connection = url.openConnection(); connection.setRequestMethod("GET"); // other methods - same result // Write message DataOutputStream out = new DataOutputStream(connection.getOutputStream()); out.writeBytes(message); out.flush(); out.close(); // Actually connect connection.connect(); // is this placed correctly? 
// Get response BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream())); String line = null; while((line = in.readLine()) != null) { System.out.println(line); // Prints the whole server response as expected } in.close(); } As I mentioned, the problem is that request.inputStream always yields an empty InputStream, so I am never able to read anything from it (of course). But as that is exactly what I'm trying to do (so I can stream the file to be uploaded to the server, read from the InputStream and save it to a file), this is rather disappointing. I tried different HTTP methods, different data payloads, and also rearranged the code over and over again, but did not seem to be able to solve the problem. What I hope to find I hope to find a solution to my problem, of course. Anything is highly appreciated: hints, code snippets, library suggestions and so on. Maybe I'm even having it all wrong and need to go in a totally different direction. So, how can I implement resumable file uploads for rather large (binary) files from a Java client to a Grails web application without manually opening ports on the server side?
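    One detail worth checking in the client code above: with HttpURLConnection a request body is normally only sent when setDoOutput(true) has been set and a method such as POST or PUT is used; bytes written on a plain GET never reach the server, which would also explain request.inputStream looking empty on the Grails side. Below is a rough sketch of sending one chunk of a resumable upload with a Content-Range header; the URL, chunk size, and the exact headers the Grails action expects are assumptions, not taken from the original post.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ChunkUploader {
            /** Sends bytes [offset, offset + chunk.length) of a file as one request. */
            public static int uploadChunk(URL url, byte[] chunk, long offset, long totalSize)
                    throws IOException {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");                      // or PUT, whatever the server action expects
                conn.setDoOutput(true);                             // without this, no request body is sent
                conn.setFixedLengthStreamingMode(chunk.length);     // stream the chunk instead of buffering it
                conn.setRequestProperty("Content-Type", "application/octet-stream");
                conn.setRequestProperty("Content-Range",
                        "bytes " + offset + "-" + (offset + chunk.length - 1) + "/" + totalSize);

                OutputStream out = conn.getOutputStream();
                out.write(chunk);                                   // resuming = starting again at a new offset
                out.close();

                int code = conn.getResponseCode();                  // completes the exchange
                conn.disconnect();
                return code;                                        // caller checks for 200/201 per chunk
            }
        }

    The idea is that the client asks the server which byte offset it already has and calls uploadChunk from there, which keeps the resume logic entirely in HTTP and out of hand-managed ports.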

    Read the article

  • Delphi: EReadError with message ‘Property PageNr does Not Exist’.

    - by lyborko
    Hi, I get SOMETIMES error message: EReadError with message 'Property PageNr does Not exist', when I try to run my own project. I am really desperate, because I see simply nothing what is the cause. The devilish is that it comes up sometimes but often. It concerns of my own component TPage. Here is declaration TPage = class(TCustomControl) // private FPaperHeight, FPaperWidth:Integer; FPaperBrush:TBrush; FPaperSize:TPaperSize; FPaperOrientation:TPaperOrientation; FPDFDocument: TPDFDocument; FPageNr:integer; procedure PaintBasicLayout; procedure PaintInterior; procedure SetPapersize(Value: TPapersize); procedure SetPaperHeight(Value: Integer); procedure SetPaperWidth(Value: Integer); procedure SetPaperOrientation(value:TPaperOrientation); procedure SetPaperBrush(Value:TBrush); procedure SetPageNr(Value:Integer); protected procedure CreateParams(var Params:TCreateParams); override; procedure AdjustClientRect(var Rect: TRect); override; public constructor Create(AOwner: TComponent);override; destructor Destroy;override; // function GetChildOwner:TComponent; override; procedure DrawControl(X,Y :integer; Dx,Dy:Double; Ctrl:TControl;NewCanvas:TCanvas); // procedure GetChildren(Proc:TGetChildProc; Root:TComponent); override; procedure Loaded; override; procedure MouseDown(Button: TMouseButton; Shift: TShiftState; X, Y: Integer); override; procedure Paint; override; procedure PrintOnCanvas(X,Y:integer; rX,rY:Double; ACanvas:TCanvas); procedure PrintOnPDFCanvas(X,Y:integer); procedure PrintOnPrinterCanvas(X,Y:integer); procedure Resize; override; procedure SetPrintKind(APrintKind:TPrintKind; APrintGroupindex:Integer); published property PageNr:integer read FPageNr write SetPageNr; property PaperBrush: TBrush read FPaperBrush write SetPaperBrush; property PaperHeight: integer read FPaperHeight write SetPaperHeight; property PaperWidth: integer read FPaperWidth write SetPaperWidth; property PaperSize: TPaperSize read FPaperSize write SetPaperSize; property PaperOrientation:TPaperOrientation read FPaperOrientation write SetPaperOrientation; property PDFDocument:TPDFDocument read FPDFDocument write FPDFDocument; property TabOrder; end; I thoroughly read the similar topic depicted here: Delphi: EReadError with message 'Property Persistence does Not exist' But here it is my own source code. No third party. Interesting: when I delete PageNr property in my dfm file (unit1.dfm), then pops up : EReadError with message 'Property PaperHeight does Not exist'. when I delete PaperHeight then it will claim PaperWidth and so on... Here is piece of dfm file: object pg1: TPage Left = 128 Top = 144 Width = 798 Height = 1127 PageNr = 0 PaperHeight = 1123 PaperWidth = 794 PaperSize = psA4 PaperOrientation = poPortrait TabOrder = 0 object bscshp4: TBasicShape Left = 112 Top = 64 Width = 105 Height = 105 PrintKind = pkNormal PrintGroupIndex = 0 Zooming = 100 Transparent = False Repeating = False PageRepeatOffset = 1 ShapeStyle = ssVertical LinePosition = 2 end object bscshp5: TBasicShape Left = 288 Top = 24 Width = 105 Height = 105 PrintKind = pkNormal PrintGroupIndex = 0 Zooming = 100 Transparent = False What the hell happens ??????? I have never seen that. I compiled the unit several times... Encoutered no problem. Maybe the cause is beyond this. I feel completely powerless.

    Read the article

  • Siebel Troubleshooting : An ODBC error occurred; SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl

    - by Giri Mandalika
    Symptom: A newly installed Siebel application server fails to start despite successful ODBC connectivity to the database. SRProc process logs ODBC error messages similar to the following: Message: GEN-13, Additional Message: dict-ERR-1109: Unable to read value from export file (Data length (32) Column definition (3)). Message: GEN-13, Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 4 ). GenericLog GenericError 1 0002157.. 11-11-18 13:28 Message: Generated SQL statement:, Additional Message: SQLFetch: SELECT RDOBJ.DOCK_ID, RDOBJ.RELATED_DOCK_ID, RDOBJ.SQL_STATEMENT, RDOBJ.CHECK_VISIBILITY, 'N', RDOBJ.COMMENTS, RDOBJ.ACTIVE, RDOBJ.SEQUENCE, RDOBJ.VIS_STRENGTH, RDOBJ.REL_VIS_STRENGTH, RDOBJ.VIS_EVT_COLS FROM ORAPERF.S_DOCK_REL_DOBJ RDOBJ, ORAPERF.S_DOCK_OBJECT DOBJ WHERE RDOBJ.REPOSITORY_ID = (SELECT ROW_ID FROM ORAPERF.S_REPOSITORY WHERE NAME = ?) AND DOBJ.ROW_ID = RDOBJ.DOCK_ID AND (DOBJ.INACTIVE_FLG = 'N' OR DOBJ.INACTIVE_FLG IS NULL) AND (RDOBJ.INACTIVE_FLG = 'N' OR RDOBJ.INACTIVE_FLG IS NULL) Message: Error: An ODBC error occurred, Additional Message: Function: DICGetRDObjects; ODBC operation: SQLFetch Message: GEN-13, Additional Message: dict-ERR-1109: Unable to read value from export file (UTLCompressFRead (fseek)). Message: GEN-13, Additional Message: dict-ERR-1107: Unable to read row 0 from export file (UTLDataValRead pBuf, col 0 ). Message: GEN-10, Additional Message: Calling Function: DICLoadDObjectInfo; Called Function: Calling DICGetRDObjects Message: GEN-10, Additional Message: Calling Function: DICLoadDict; Called Function: DICLoadDObjectInfo GenericError (srpdb.cpp (860) err=3006 sys=2) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl (srpsmech.cpp (74) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl (srpmtsrv.cpp (107) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl (smimtsrv.cpp (1203) err=3006 sys=0) SBL-GEN-03006: Error calling function: DICFindTable m_pReqTbl SmiLayerLog Error Terminate process due to unrecoverable error: 3006. (Main Thread) An inconsistent or corrupted dictionary file "diccache.dat" is likely the cause. Solution: Stop the application server and manually kill the remaining Siebel application specific processes eg., stop_server all pkill siebmtsh pkill siebproc .. Remove $SIEBEL_HOME/bin/diccache.dat file. It will be re-generated during the application server startup Start the application server start_server all

    Read the article

  • NTFS and NTFS-3G

    - by MestreLion
    I have a netbook with Ubuntu Netbook 10.04 and a desktop with Mint 10 (roughly Ubuntu Desktop 10.10). Both of them have read/write NTFS partitions mounted via /etc/fstab, and it works fine. I've read on the net, Google, forums, and several posts here that NTFS-3G is the driver that gives you full access to an NTFS partition, that it is new, great, powerful, and so on. But my entries use plain ntfs, with no mention of -3g, and they still work perfectly for both reading and writing. Am I already using ntfs-3g? Does 10.04 onwards use it "under the hood"? How can I check that on my system? Should my /etc/fstab entries use "ntfs-3g" as the filesystem type? Why do some posts refer to mounting ntfs, while others say to mount ntfs-3g? I'm really confused about when I should use the fs-type name (ntfs) and when the driver name (ntfs-3g). Or is it irrelevant now, and is ntfs effectively an alias for ntfs-3g nowadays? I've read some posts here, from Oct 2010 and Nov 2010, that "announce" that ntfs-3g "finally arrived", which is well after Lucid 10.04. Could someone please untangle this for me and explain the relation between ntfs and ntfs-3g, what the current status is (10.04 and 10.10), and where I should use each (regarding mount, fstab, etc.)? Sorry for the long, confusing, redundant text; I'm really getting sleepy.
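    For reference, the two fstab spellings look like this (the UUID and mount point are placeholders); on 10.04 and later the ntfs-3g package normally installs /sbin/mount.ntfs as a link to mount.ntfs-3g, so either line usually ends up invoking the same userspace driver, and ls -l /sbin/mount.ntfs on your own machine is the quickest way to confirm that assumption.

        # /etc/fstab - use either line; both resolve to ntfs-3g when mount.ntfs links to mount.ntfs-3g
        UUID=XXXX-XXXX  /media/windows  ntfs     defaults,umask=007,gid=46  0  0
        UUID=XXXX-XXXX  /media/windows  ntfs-3g  defaults,umask=007,gid=46  0  0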

    Read the article

  • API Auth vs User Auth

    - by user1626384
    I have read many posts and articles on this topic but still can't connect the dots. I want to make a Rails app that is strictly a JSON API, maybe using Sinatra or the rails-api gem. I also want to make both a web client app and an iPhone app that consume the API. There are no plans to let third-party developers use it. So I could create a separate username/password combination for the web and mobile clients and use HTTP Basic over SSL. Each app would keep these values as configuration in its source and use them to authenticate to the API, so only these clients can make a call; anyone else trying would get a 401 response. This would be considered handling the API authentication. The web and mobile client apps allow end users to sign up and read/write data through the API. When each user is created, I create and save a token in their profile. If a user successfully signs in, I send back the token. On each future read/write, the client also sends this token in the header; I look up the user by the token in the database and perform the read/write. Does this sound like an appropriate way to handle it? For the web client, when I initially send back the token, where do I store it? In a cookie? Do I also drop a cookie to handle session state?

    Read the article

  • C# 4.0 in a Nutshell, Fourth Edition

    - by outcoldman
    I just became the lucky owner of C# IN A NUTSHELL, 4th edition. I saw the previous, third edition of this book; we presented it at one of our events at Yaroslavl State University, but that copy was a Russian translation published in Russia, which was its downside: books printed in Russia tend to use really poor paper. I should say that I haven't read this book to the end yet, but I was already surprised. Why? Because I have heard a lot about Richter's CLR via C# (I already own the English version of its 3rd edition, and it is waiting for my attention), and only a few words about C# IN A NUTSHELL, at least in my circles. I heard about this book just once, on one of the Alt.Net group podcasts, and the gist was that Richter is a really good book and C# IN A NUTSHELL is a good handbook. My opinion is this: you should read Richter if you want to develop for .NET. But if you want to develop on .NET with C#, you should read C# IN A NUTSHELL too. Read more...

    Read the article

  • Book Review - Programming Windows Azure by Siriram Krishnan

    - by BuckWoody
    As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This a review of the first one, called Programming Windows Azure by Siriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx  Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples normally translate. I also saw that it had pretty good reviews. What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the application fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, and connect to for the Azure application. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better. Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since the Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my though process. Programming Windows Azure by Siriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation Background and history, but not too much I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts. 
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy-switch in and switch out.  (Chapter 4) There are two Runtime API’s, one of external and one for internal. Realizing how powerful this paradigm really is. Some places seem light, and to drop off but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal, not required to code that. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (Iaas). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of message in a disconnected system. Order is not guaranteed for a message, so if you need that set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable, however a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBA’s in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. Chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server Really) database.  

    Read the article

  • Reading from a staging 2D texture array in DirectX10

    - by Don Reba
    I have a DX10 program, where I create an array of 3 16x16 textures, then map, read, and unmap each subresource in turn. I use a single mip level, set resource usage to staging and CPU access to read. Now, here is the problem: Subresource 0 contains 1024 bytes, pitch 64, as expected. Subresource 1 contains 512 bytes, pitch 64. Subresource 2 contains 256 bytes, pitch 64. I expect all three to be the same size. Debugging output is enabled, but not reporting any warnings or errors. Am I missing something, or might this be some sort of driver issue? Here is the code. The language is Nemerle, but C# and C++ would look almost the same. I have looked through the generated code, and am fairly confident the problem is not language-related. def cpuTexture = Texture2D ( device , Texture2DDescription() <- { Width = 16; Height = 16; MipLevels = 1; ArraySize = 3; Format = Format.R32_Float; Usage = ResourceUsage.Staging; CpuAccessFlags = CpuAccessFlags.Read; SampleDescription = SampleDescription(count = 1, quality = 0); } ); foreach (subresource in [0 .. 2]) { def data = cpuTexture.Map(subresource, MapMode.Read, MapFlags.None); Console.WriteLine($"subresource $subresource"); Console.WriteLine($"length = $(data.Data.Length)"); Console.WriteLine($"pitch = $(data.Pitch)"); cpuTexture.Unmap(subresource); }

    Read the article

  • Fix corrupt NTFS partition without Windows

    - by Capt.Nemo
    MY NTFS Partition has gotten corrupt somehow (it's a relic from the days when I had Windows installed). I'm putting the debug output of fdisk and blkid here. At the same time, any OS is unable to mount my root partition, which is located next to my NTFS partition. I'm not sure if this has anything to do with it, though. I get the following error while trying to mount my root partition (sda5) mount: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so ubuntu@ubuntu:~$ dmesg | tail [ 1019.726530] Descriptor sense data with sense descriptors (in hex): [ 1019.726533] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 [ 1019.726551] 1a 3e ed 92 [ 1019.726558] sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed [ 1019.726568] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 1a 3e ed 40 00 01 00 00 [ 1019.726584] end_request: I/O error, dev sda, sector 440331666 [ 1019.726602] JBD: Failed to read block at offset 462 [ 1019.726609] ata1: EH complete [ 1019.726612] JBD: recovery failed [ 1019.726617] EXT4-fs (sda5): error loading journal When I open gparted (using live CD), I get an exclamation next to my NTFS drive which states Is there a way to run chkdsk without using windows ? My attempt to run fsck results in the following : ubuntu@ubuntu:~$ sudo fsck /dev/sda fsck from util-linux-ng 2.17.2 e2fsck 1.41.14 (22-Dec-2010) fsck.ext2: Superblock invalid, trying backup blocks... fsck.ext2: Bad magic number in super-block while trying to open /dev/sda The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Update : I was able to fix the NTFS partition running chkdsk off HBCD, but it seems that the superblock problem still remains. *Update 2: * Fixed superblock issue using e2fsck -c /dev/sda5

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Update: Commercially Supported GlassFish VersionsAquarium blogger David Delabassee shares background information and links to where you can download the recently released GlassFish Server Bundle Patch 3.1.2.8. Read the article. Announcing WebLogic on Oracle Database Appliance 2.7Oracle WebLogic Server on Oracle Database Appliance 2.7 offers a complete solution for building and deploying enterprise Java EE applications in a fully integrated system of software, servers, storage, and networking that delivers highly available database and WebLogic services. Learn more. APAC Partner iDay: What's New in Oracle WebLogic, 8-Apr 12 noon SG/2pm AEDT/9:30 IST - Invite your Partners - Register Virtual Developer Conference:  Creating a Foundation for Cloud Applications using Oracle WebLogic and Oracle Coherence - OnDemand Webcast: WebLogic Configuration using Chef and Puppet - On-Demand Podcast Series: Part 3 - Oracle WebLogic Server and Oracle Database Integration - Podcast Coherence*Web: Sharing an httpSession Among Applications in Different Oracle WebLogic Clusters SOA solution architect Jordi Villena shows how easy it is to extend Coherence*Web to enable session sharing. Read the article. Multi-Factor Authentication in Oracle WebLogic Using multi-factor authentication to protect web applications deployed on Oracle WebLogic. Read the article. Video: Coherence Community on Java.net - 4 Projects available under CDDL-1.0 Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how you can actively participate in open source Coherence Community projects. Watch the video. Working with Oracle Security Token Service in an Architecture Involving Oracle WebLogic Server and Oracle Service Bus Oracle Fusion Middleware specialist Ronaldo Fernandes takes you step by step through the process of creating a single sign-on between Oracle WebLogic and Oracle Service Bus using Oracle Security Token Service (OSTS) to generate SAML tokens. Read the article. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress,

    Read the article

  • software architecture (OO design) refresher course

    - by PeterT
    I am the lead developer and team lead in a small RAD team. Deadlines are tight and we have to release often, which we do, and this is what keeps the business happy. While we (the development team) are trying to maintain the quality of the code (clean, short methods), I can't help but notice that the overall quality of the OO design and architecture is getting worse over time: the library we are working on is gradually reducing itself to a "bag of functions". We try to use design patterns, but since we don't really have much time for design as such, we mostly use the creational ones. I have read Code Complete, Design Patterns (GoF and enterprise), The Pragmatic Programmer, and many books from the Effective XXX series. Should I re-read them, since I read them a long time ago and have forgotten quite a lot, or have other or better OO design / software architecture books been published since then that I should definitely read? Any ideas or recommendations on how I can get the situation under control and start improving the architecture? The way I see it, I will start by improving the architectural and design quality of the software components I am working on, and then start helping other team members once I find what works for me.

    Read the article

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently, while preparing for Big Data work, I ended up on a very interesting read for everybody. It was created by Microsoft and, in my opinion, it is a fantastic read. It took me some time to get through the entire book, but it was worth it, as it tries to answer two very interesting questions related to NoSQL adoption. Here is the abstract from the book: Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge: • Which NoSQL database(s) best meet(s) the needs of the organization? • How does an organization integrate a NoSQL database into its solutions? As I keep reading the book, I find it very interesting and informative. If you have time this weekend, I suggest you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution comprising an assortment of different databases. Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence. While we are talking about Big Data and NoSQL, do not forget to check out tomorrow's blog post, where I am going to talk about the same subject, and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Update Manager won't open (error related to pythonverbose)

    - by Mateus Machado Luna
    I'm having an issue with update-manager. Last night, my computer restart suddenly during the update process. Now it won't open and it keep appearing on the notifier with a message warning that an error occurred. The error is the same that is displayed when I try to open it on the terminal: Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected Traceback (most recent call last): File "/usr/bin/update-manager", line 26, in <module> from __future__ import print_function EOFError: EOF read where not expected I've already seen some questions here, but most of them are related to problems with ppas and the source.list file. This seems to be a bug on update-manager itself. I've already tried to remove it and install again, but the problem persists. I also noted another bug: my source-center doesn't open too. The message for it is similar to the other one: Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected Traceback (most recent call last): File "/usr/lib/command-not-found", line 5, in <module> from __future__ import absolute_import, print_function EOFError: EOF read where not expected For now I'm using apt-get update && upgrade for updating and the Synaptic for the source management. But I really would like to fix this stuff. Anyone can help? I'm with Ubuntu 12.10, Gnome-remix, 64-bits.

    Read the article

  • Threading models when talking to hardware devices

    - by Fuzz
    When writing an interface to hardware over a communication bus, communication timing can be critical to the operation of a device. As such, it is common for developers to spin up new threads to handle communications. It can also be a terrible idea to have a whole bunch of threads in your system, and in the case that you have multiple hardware devices you may end up with many threads that are outside the control of the main application. Certainly it is common to have two threads per device, one for reading and one for writing. I am trying to determine the pros and cons of the two models I can think of, and would love the help of the Programmers community. Item 1: each device instance handles its own threads (or shares a thread for a communication device). One thread may exist for writing and one for reading. Writes requested through the API are buffered and worked on by the writer thread. The read thread exists in the case of blocking communications and uses callbacks to pass read data to the application. Timing of communications can be handled by the communication thread. Item 2: devices aren't given their own threads. Instead, read and write requests are queued/buffered. The application then calls a "DoWork" function on the interface, allowing all reads and writes to take place and fire their callbacks. Timing is handled by the application, and the driver can request to be called at a specific frequency. The pros of Item 1 include finer-grained control of timing at the communication level, at the expense of control over what is going on at the higher application level (which, for a real-time system, can be terrible). The pros of Item 2 include better application-level control over the timing of the entire system, at the expense of letting each driver handle its own business. If anyone has experience with these scenarios, I'd love to hear some ideas on the approaches used.
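    To make the comparison concrete, here is a rough Java sketch of the two shapes (all names are invented for illustration; real drivers would also need read paths, shutdown, and error handling):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Item 1: the device driver owns its own writer thread and its own timing.
        class ThreadedDevice {
            private final BlockingQueue<byte[]> writeQueue = new LinkedBlockingQueue<>();
            private final Thread writer = new Thread(this::drainQueue, "device-writer");

            void start() { writer.start(); }

            void write(byte[] frame) { writeQueue.offer(frame); }      // buffered; returns immediately

            private void drainQueue() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        byte[] frame = writeQueue.take();               // blocks until work arrives
                        sendToBus(frame);                               // per-device timing lives here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }

            private void sendToBus(byte[] frame) { /* bus-specific I/O */ }
        }

        // Item 2: the application owns the clock and calls every driver at the rate it chose.
        class PolledDevice {
            private final java.util.Queue<byte[]> pending = new java.util.ArrayDeque<>();

            void write(byte[] frame) { pending.add(frame); }            // only queued; nothing sent yet

            /** Called by the application's scheduler at the frequency this driver requested. */
            void doWork() {
                byte[] frame;
                while ((frame = pending.poll()) != null) {
                    sendToBus(frame);                                   // timing controlled by the caller
                }
            }

            private void sendToBus(byte[] frame) { /* bus-specific I/O */ }
        }

    In the second shape everything runs on the application's scheduling thread, which is exactly what gives the application control of timing, and also why pending needs no locking there.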

    Read the article

  • How do I mount a CIFS share via FSTAB and give full RW to Guest

    - by Kendor
    I want to create a Public folder that has full RW access. The problem with my configuration is that Windows users have no issues as guests (they can RW and Delete), my Ubuntu client can't do the same. We can only write and read, but not create or delete. Here is the my smb.conf from my server: [global] workgroup = WORKGROUP netbios name = FILESERVER server string = TurnKey FileServer os level = 20 security = user map to guest = Bad Password passdb backend = tdbsam null passwords = yes admin users = root encrypt passwords = true obey pam restrictions = yes pam password change = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . add user script = /usr/sbin/useradd -m '%u' -g users -G users delete user script = /usr/sbin/userdel -r '%u' add group script = /usr/sbin/groupadd '%g' delete group script = /usr/sbin/groupdel '%g' add user to group script = /usr/sbin/usermod -G '%g' '%u' guest account = nobody syslog = 0 log file = /var/log/samba/samba.log max log size = 1000 wins support = yes dns proxy = no socket options = TCP_NODELAY panic action = /usr/share/samba/panic-action %d [homes] comment = Home Directory browseable = no read only = no valid users = %S [storage] create mask = 0777 directory mask = 0777 browseable = yes comment = Public Share writeable = yes public = yes path = /srv/storage The following FSTAB entry doesn't yield full R/W access to the share. //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0 This doesn't work either //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0 Using the following location in Nemo/Nautilus w/o the Share being mounted does work: smb://192.168.0.5/storage/ Extra info. I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately make "nobody" be the owner, and the group "no group" has read and write, with everyone else as read-only. What am I doing wrong?

    Read the article

  • Web Crawler for Learning Topics on Wikipedia

    - by Chris Okyen
    When I want to learn a vast topic on Wikipedia, I don't know where to start. For instance, say I want to learn about binary stars; I then have to know the other things linked on that page, and the pages linked from all of those linked pages, and so on, down to a specified number of levels. I want to write a web crawler like HTTrack or something similar that will display a hierarchy of the links on a certain page and the links on those linked pages. I wish to use as much prewritten code as possible. Here is an example, pretending we are bending the rules by grabbing links from only the first sentence of each page, and that the example archives and "processes" two levels deep. The page is Ternary operation. The first level: "In mathematics a ternary operation is an N-ary operation." The second level: under Mathematics, "Mathematics (from Greek μάθημα, máthēma, 'knowledge, study, learning') is the abstract study of topics encompassing quantity, structure, space, change and others; it has no generally accepted definition." Under N-ary, "In logic, mathematics, and computer science, the arity (/ˈærɪti/) of a function or operation is the number of arguments or operands that the function takes." Under Operation, "In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values." I need some way to determine in what order to approach all these wiki pages to learn the concept (in this case, ternary operations). Following along with this example, one way to show the reading path would be a printed flowchart (please tell me if there is a better way to explain this). It would show that the first sentence of the Mathematics page doesn't link to the first sentences of the pages linked from the Ternary operation page two levels deep. In other words, the Mathematics node, a child of the top page (Ternary_operation), does not have any child nodes that reference the top page's other children, N-ary and Operation, so it is safe to read Mathematics first. Since N-ary has a link to Operation, we should read the Operation page second and finally read the N-ary page last. Again, I wish to use as much prewritten code as possible, and was wondering what language to use and what would be the simplest way to go about doing this, if there isn't already something out there. Thank you!
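    On the "as much prewritten code as possible" point, an HTML parser such as jsoup (Java) already handles the fetching and link extraction, leaving only the traversal and the ordering logic to write. Here is a rough sketch of the crawling half, limited to the first paragraph of each article (rather than the first sentence) and a fixed depth; the #mw-content-text selector and the start URL are assumptions about Wikipedia's current markup, and a real crawler should throttle its requests or use the MediaWiki API instead.

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;
        import java.util.HashSet;
        import java.util.Set;

        public class WikiLinkTree {
            private static final int MAX_DEPTH = 2;

            static void crawl(String url, int depth, Set<String> seen) throws Exception {
                if (depth > MAX_DEPTH || !seen.add(url)) return;        // depth limit / already visited
                Document doc = Jsoup.connect(url).userAgent("learning-crawler-sketch").get();
                Element firstParagraph = doc.select("#mw-content-text p").first();
                if (firstParagraph == null) return;
                String indent = new String(new char[depth]).replace("\0", "  ");
                System.out.println(indent + doc.title());               // prints the hierarchy as it is found
                for (Element link : firstParagraph.select("a[href]")) {
                    String href = link.attr("href");
                    if (href.startsWith("/wiki/") && !href.contains(":")) {   // skip File:, Help:, etc.
                        crawl(link.attr("abs:href"), depth + 1, seen);         // children of this topic
                    }
                }
            }

            public static void main(String[] args) throws Exception {
                crawl("https://en.wikipedia.org/wiki/Ternary_operation", 0, new HashSet<String>());
            }
        }

    Choosing the reading order afterwards is essentially a topological sort of the collected link graph, which is a separate, language-independent step from the crawling itself.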

    Read the article

  • external hard drive not detected after ubuntu crash [restarting machine]

    - by Netmoon
    Today I tried to watch a movie (with VLC media player) from my external hard drive (an Expansion Portable, 500 GB). It played fine at first; then I paused the movie, played it again, and my Ubuntu 12.04 crashed. I had to restart the machine, and after that Ubuntu can't recognize the hard drive at all. I changed the USB cable, but that made no difference. This is my dmesg output: [ 191.281630] usb 2-1.3: new full-speed USB device number 9 using ehci_hcd [ 191.353527] usb 2-1.3: device descriptor read/64, error -32 [ 191.529115] usb 2-1.3: device descriptor read/64, error -32 [ 191.704669] usb 2-1.3: new full-speed USB device number 10 using ehci_hcd [ 191.776524] usb 2-1.3: device descriptor read/64, error -32 [ 191.952202] usb 2-1.3: device descriptor read/64, error -32 [ 192.127772] usb 2-1.3: new full-speed USB device number 11 using ehci_hcd [ 192.534742] usb 2-1.3: device not accepting address 11, error -32 [ 192.606749] usb 2-1.3: new full-speed USB device number 12 using ehci_hcd [ 193.013696] usb 2-1.3: device not accepting address 12, error -32 [ 193.013906] hub 2-1:1.0: unable to enumerate USB device on port 3 Note: I can hear the sound of the external drive's mechanism. When I attach the hard drive to a USB port, the status light comes on, but it is dim, and I think the drive is not getting enough power. I also tried to mount it in Microsoft Windows 7, but nothing happened.

    Read the article

  • HttpURLConnection: connection.getInputStream() throws java.io.FileNotFoundException

    - by user3643283
    I created a method "UPLPAD2" to upload file to server. Splitting my file to packets(10MB). It's OK (100%). But when i call getInputStream, i get FileNotFoundException. I think, in loop, i make new HttpURLConnection to set "setRequestProperty". This is a problem. Here's my code: @SuppressLint("NewApi") public int upload2(URL url, String filePath, OnProgressUpdate progressCallBack, AtomicInteger cancelHandle) throws IOException { HttpURLConnection connection = null; InputStream fileStream = null; OutputStream out = null; InputStream in = null; HttpResponse response = new HttpResponse(); Log.e("Upload_Url_Util", url.getFile()); Log.e("Upload_FilePath_Util", filePath); long total = 0; try { // Write the request. // Read from filePath and upload to server (url) byte[] buf = new byte[1024]; fileStream = new FileInputStream(filePath); long lenghtOfFile = (new java.io.File(filePath)).length(); Log.e("LENGHT_Of_File", lenghtOfFile + ""); int totalPacket = 5 * 1024 * 1024; // 10 MB int totalChunk = (int) ((lenghtOfFile + (totalPacket - 1)) / totalPacket); String headerValue = ""; String contentLenght = ""; for (int i = 0; i < totalChunk; i++) { long from = i * totalPacket; long to = 0; if ((from + totalPacket) > lenghtOfFile) { to = lenghtOfFile; } else { to = (totalPacket * (i + 1)); } to = to - 1; headerValue = "bytes " + from + "-" + to + "/" + lenghtOfFile; contentLenght = "Content-Length:" + (to - from + 1); Log.e("Conten_LENGHT", contentLenght); connection = client.open(url); connection.setRequestMethod("POST"); connection.setRequestProperty("Content-Range", headerValue); connection.setRequestProperty("Content-Length", Long.toString(to - from + 1)); out = connection.getOutputStream(); Log.e("Lenght_Of_File", lenghtOfFile + ""); Log.e("Total_Packet", totalPacket + ""); Log.e("Total_Chunk", totalChunk + ""); Log.e("Header_Valure", headerValue); int read = 1; while (read > 0 && cancelHandle.intValue() == 0 && total < totalPacket * (i + 1)) { read = fileStream.read(buf); if (read > 0) { out.write(buf, 0, read); total += read; progressCallBack .onProgressUpdate((int) ((total * 100) / lenghtOfFile)); } } Log.e("TOTAL_", total + "------" + totalPacket * (i + 1)); Log.e("I_", i + ""); Log.e("LENGHT_Of_File", lenghtOfFile + ""); if (i < totalChunk - 1) { connection.disconnect(); } out.close(); } // Read the response. response.setHttpCode(connection.getResponseCode()); in = connection.getInputStream(); // I GET ERROR HERE. 
if (connection.getResponseCode() != HttpURLConnection.HTTP_OK) { throw new IOException("Unexpected HTTP response: " + connection.getResponseCode() + " " + connection.getResponseMessage()); } byte[] body = readFully(in); response.setBody(body); response.setHeaderFields(connection.getHeaderFields()); if (cancelHandle.intValue() != 0) { return 1; } JSONObject jo = new JSONObject(response.getBodyAsString()); Log.e("Upload_Body_res_", response.getBodyAsString()); if (jo.has("error")) { if (jo.has("code")) { int errCode = jo.getInt("code"); Log.e("Upload_Had_errcode", errCode + ""); return errCode; } else { return 504; } } Log.e("RESPONE_BODY_UPLOAD", response.getBodyAsString() + ""); return 0; } catch (Exception e) { e.printStackTrace(); Log.e("Http_UpLoad_Response_Exception", e.toString()); response.setHttpCode(connection.getResponseCode()); Log.e("ErrorCode_Upload_Util_Return", response.getHttpCode() + ""); if (connection.getResponseCode() == 200) { return 1; } else if (connection.getResponseCode() == 0) { return 1; } else { return response.getHttpCode(); } // Log.e("ErrorCode_Upload_Util_Return", response.getHttpCode()+""); } finally { if (fileStream != null) fileStream.close(); if (out != null) out.close(); if (in != null) in.close(); } } And Logcat 06-12 09:39:29.558: W/System.err(30740): java.io.FileNotFoundException: http://download-f77c.fshare.vn/upload/NRHAwh+bUCxjUtcD4cn9xqkADpdL32AT9pZm7zaboHLwJHLxOPxUX9CQxOeBRgelkjeNM5XcK11M1V-x 06-12 09:39:29.558: W/System.err(30740): at com.squareup.okhttp.internal.http.HttpURLConnectionImpl.getInputStream(HttpURLConnectionImpl.java:187) 06-12 09:39:29.563: W/System.err(30740): at com.fsharemobile.client.HttpUtil.upload2(HttpUtil.java:383) 06-12 09:39:29.563: W/System.err(30740): at com.fsharemobile.fragments.ExplorerFragment$7$1.run(ExplorerFragment.java:992) 06-12 09:39:29.568: W/System.err(30740): at java.lang.Thread.run(Thread.java:856) 06-12 09:39:29.568: E/Http_UpLoad_Response_Exception(30740): java.io.FileNotFoundException: http://download-f77c.fshare.vn/upload/NRHAwh+bUCxjUtcD4cn9xqkADpdL32AT9pZm7zaboHLwJHLxOPxUX9CQxOeBRgelkjeNM5XcK11M1V-x
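    A side note on the stack trace: HttpURLConnection.getInputStream() throws FileNotFoundException whenever the server answers with an error status such as 404, so the exception usually means the last chunk request was rejected by the server rather than that the stream-handling code is wrong. The body of a failed response has to be read from getErrorStream() instead; here is a small, general-purpose sketch of that pattern (not specific to the upload code above):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;

        final class HttpResponses {
            /** Reads the body of a completed request, using the error stream for non-2xx codes. */
            static String readBody(HttpURLConnection connection) throws IOException {
                int code = connection.getResponseCode();
                InputStream stream = (code >= 200 && code < 300)
                        ? connection.getInputStream()   // success: the normal response body
                        : connection.getErrorStream();  // failure: the server's error page, may be null
                if (stream == null) return "";
                BufferedReader reader = new BufferedReader(new InputStreamReader(stream, "UTF-8"));
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line).append('\n');
                }
                reader.close();
                return body.toString();
            }
        }

    Reading each chunk's response code (and, on failure, its error body) before opening the next connection would also surface a rejected Content-Range immediately instead of at the very end of the loop.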

    Read the article

  • OpenGL texture shifted somewhat to the left when applied to a quad

    - by user308226
    I'm a bit new to OpenGL and I've been having a problem with using textures. The texture seems to load fine, but when I run the program, the texture displays shifted a couple pixels to the left, with the section cut off by the shift appearing on the right side. I don't know if the problem here is in the my TGA loader or if it's the way I'm applying the texture to the quad. Here is the loader: #include "texture.h" #include <iostream> GLubyte uncompressedheader[12] = {0,0, 2,0,0,0,0,0,0,0,0,0}; GLubyte compressedheader[12] = {0,0,10,0,0,0,0,0,0,0,0,0}; TGA::TGA() { } //Private loading function called by LoadTGA. Loads uncompressed TGA files //Returns: TRUE on success, FALSE on failure bool TGA::LoadCompressedTGA(char *filename,ifstream &texturestream) { return false; } bool TGA::LoadUncompressedTGA(char *filename,ifstream &texturestream) { cout << "G position status:" << texturestream.tellg() << endl; texturestream.read((char*)header, sizeof(header)); //read 6 bytes into the file to get the tga header width = (GLuint)header[1] * 256 + (GLuint)header[0]; //read and calculate width and save height = (GLuint)header[3] * 256 + (GLuint)header[2]; //read and calculate height and save bpp = (GLuint)header[4]; //read bpp and save cout << bpp << endl; if((width <= 0) || (height <= 0) || ((bpp != 24) && (bpp !=32))) //check to make sure the height, width, and bpp are valid { return false; } if(bpp == 24) { type = GL_RGB; } else { type = GL_RGBA; } imagesize = ((bpp/8) * width * height); //determine size in bytes of the image cout << imagesize << endl; imagedata = new GLubyte[imagesize]; //allocate memory for our imagedata variable texturestream.read((char*)imagedata,imagesize); //read according the the size of the image and save into imagedata for(GLuint cswap = 0; cswap < (GLuint)imagesize; cswap += (bpp/8)) //loop through and reverse the tga's BGR format to RGB { imagedata[cswap] ^= imagedata[cswap+2] ^= //1st Byte XOR 3rd Byte XOR 1st Byte XOR 3rd Byte imagedata[cswap] ^= imagedata[cswap+2]; } texturestream.close(); //close ifstream because we're done with it cout << "image loaded" << endl; glGenTextures(1, &texID); // Generate OpenGL texture IDs glBindTexture(GL_TEXTURE_2D, texID); // Bind Our Texture glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Linear Filtered glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, imagedata); delete imagedata; return true; } //Public loading function for TGA images. Opens TGA file and determines //its type, if any, then loads it and calls the appropriate function. //Returns: TRUE on success, FALSE on failure bool TGA::loadTGA(char *filename) { cout << width << endl; ifstream texturestream; texturestream.open(filename,ios::binary); texturestream.read((char*)header,sizeof(header)); //read 6 bytes into the file, its the header. 
//if it matches the uncompressed header's first 6 bytes, load it as uncompressed LoadUncompressedTGA(filename,texturestream); return true; } GLubyte* TGA::getImageData() { return imagedata; } GLuint& TGA::getTexID() { return texID; } And here's the quad: void Square::show() { glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, texture.texID); //Move to offset glTranslatef( x, y, 0 ); //Start quad glBegin( GL_QUADS ); //Set color to white glColor4f( 1.0, 1.0, 1.0, 1.0 ); //Draw square glTexCoord2f(0.0f, 0.0f); glVertex3f( 0, 0, 0 ); glTexCoord2f(1.0f, 0.0f); glVertex3f( SQUARE_WIDTH, 0, 0 ); glTexCoord2f(1.0f, 1.0f); glVertex3f( SQUARE_WIDTH, SQUARE_HEIGHT, 0 ); glTexCoord2f(0.0f, 1.0f); glVertex3f( 0, SQUARE_HEIGHT, 0 ); //End quad glEnd(); //Reset glLoadIdentity(); }

    Read the article

  • Steganography: encoded audio and video files get corrupted and will not play. What is the issue?

    - by Shantanu Gupta
    I have written a steganography program that hides (encrypts/decrypts) text inside image, audio and video files. I use BMP images (54-byte header), WAV audio (44-byte header) and AVI video (56-byte header). The text is embedded successfully in all three formats and is also decoded back correctly. However, after encoding, the audio and video files no longer play - they appear to be corrupted. What could be the problem? I am working with the Turbo C++ compiler; I know it is a very outdated compiler, but I have to use it for this. Here is my code to encrypt:

        int Binary_encode(char *txtSourceFileName, char *binarySourceFileName, char *binaryTargetFileName, const short headerSize)
        {
            long BinarySourceSize = 0, TextSourceSize = 0;
            char *Buffer;
            long BlockSize = 10240, i = 0;
            ifstream ReadTxt, ReadBinary;   // readers

            ReadTxt.open(txtSourceFileName, ios::binary | ios::in);   // input mode, i.e. read only
            if (!ReadTxt)
            {
                cprintf("\nFile can not be opened.");
                return 0;
            }
            ReadBinary.open(binarySourceFileName, ios::binary | ios::in);   // input mode, i.e. read only
            if (!ReadBinary)
            {
                ReadTxt.close();   // close the already opened file
                cprintf("\nFile can not be opened.");
                return 0;
            }

            ReadBinary.seekg(0, ios::end);   // move the file pointer to the end of the file
            ReadTxt.seekg(0, ios::end);
            BinarySourceSize = (long)ReadBinary.tellg();   // tellg() returns the pointer position, i.e. the file size
            TextSourceSize = (long)ReadTxt.tellg();
            ReadBinary.seekg(0, ios::beg);   // back to the beginning of the file
            ReadTxt.seekg(0, ios::beg);

            // the carrier file should be at least 50 times the size of the file to be hidden
            if (BinarySourceSize < TextSourceSize * 50)
            {
                cout << "\n\n";
                cprintf("Binary File size should be bigger than text file size.");
                ReadBinary.close();
                ReadTxt.close();
                return 0;
            }

            cout << "\n";
            cprintf("\n\nSize of Source Image/Audio File is : ");
            cout << (float)BinarySourceSize / 1024;
            cprintf("KB");
            cout << "\n";
            cprintf("Size of Text File is ");
            cout << TextSourceSize;
            cprintf(" Bytes");
            cout << "\n";
            getch();

            // copy the header to the target file unchanged, otherwise the file will not open
            // (a BMP header, for example, is 54 bytes)
            Buffer = new char[headerSize];
            ofstream WriteBinary;   // writer
            WriteBinary.open(binaryTargetFileName, ios::binary | ios::out | ios::trunc);   // created, or truncated if it already exists
            ReadBinary.read(Buffer, headerSize);
            WriteBinary.write(Buffer, headerSize);   // write the header to the target file
            delete[] Buffer;   // deallocate memory

            /*
            Buffer = new char[sizeof(long)];
            Buffer = (char *)(&TextSourceSize);
            cout << Buffer;
            */

            // write the number of hidden bytes immediately after the header,
            // so that the file can be decrypted later
            WriteBinary.write((char *)(&TextSourceSize), sizeof(long));

            if (!(Buffer = new char[TextSourceSize]))
            {
                cprintf("Enough Memory could not be assigned.");
                return 0;
            }
            ReadTxt.read(Buffer, TextSourceSize);        // read all data from the text file
            ReadTxt.close();                             // file no longer needed
            WriteBinary.write(Buffer, TextSourceSize);   // write the text data into the carrier
            delete[] Buffer;                             // deallocate memory

            // replace TextSourceSize+1 below with TextSourceSize and run the program to see the change;
            // bytes 50-54 hold colour values, which we will be changing
            ReadBinary.seekg(TextSourceSize + 1, ios::cur);   // move the source pointer past the bytes that were replaced

            // write the remaining carrier content to the target file
            while (i < BinarySourceSize - headerSize - TextSourceSize + 1)
            {
                i = i + BlockSize;
                Buffer = new char[BlockSize];
                ReadBinary.read(Buffer, BlockSize);
                WriteBinary.write(Buffer, BlockSize);
                delete[] Buffer;   // free memory, otherwise the program can fail
            }
            ReadBinary.close();
            WriteBinary.close();
            // encoding completed
            return 0;
        }

    Code to decrypt:

        int Binary_decode(char *binarySourceFileName, char *txtTargetFileName, const short headerSize)
        {
            long TextDestinationSize = 0;
            char *Buffer;
            long BlockSize = 10240;
            ifstream ReadBinary;
            ofstream WriteText;

            ReadBinary.open(binarySourceFileName, ios::binary | ios::in);   // open for reading
            if (!ReadBinary)
            {
                cprintf("File can not be opened");
                return 0;
            }

            ReadBinary.seekg(headerSize, ios::beg);
            Buffer = new char[4];
            ReadBinary.read(Buffer, 4);
            TextDestinationSize = *((long *)Buffer);   // length field written by the encoder
            delete[] Buffer;

            cout << "\n\n";
            cprintf("Size of the File that will be created is : ");
            cout << TextDestinationSize;
            cprintf(" Bytes");
            cout << "\n\n";
            sleep(1);

            WriteText.open(txtTargetFileName, ios::binary | ios::out | ios::trunc);   // created, or truncated if it already exists
            while (TextDestinationSize > 0)
            {
                if (TextDestinationSize < BlockSize)
                    BlockSize = TextDestinationSize;
                Buffer = new char[BlockSize];
                ReadBinary.read(Buffer, BlockSize);
                WriteText.write(Buffer, BlockSize);
                delete[] Buffer;
                TextDestinationSize = TextDestinationSize - BlockSize;
            }
            ReadBinary.close();
            WriteText.close();
            return 0;
        }

        int text_encode(char *SourcefileName, char *DestinationfileName)
        {
            ifstream fr;   // reader
            ofstream fw;   // writer
            char c;
            int random;
            clrscr();

            fr.open(SourcefileName, ios::binary);   // input mode, i.e. read only
            if (!fr)
            {
                cprintf("File can not be opened.");
                getch();
                return 0;
            }
            fw.open(DestinationfileName, ios::binary | ios::out | ios::trunc);   // created, or truncated if it already exists

            while (fr)
            {
                int i;
                while (fr != 0)
                {
                    fr.get(c);   // read a character from the file and advance its pointer
                    char ch;
                    ch = c;
                    ch = ch + 1;
                    fw << ch;    // append the shifted character to the output file
                }
            }
            fr.close();
            fw.close();
            return 0;
        }

        int text_decode(char *SourcefileName, char *DestinationName)
        {
            ifstream fr;   // reader
            ofstream fw;   // writer
            char c;
            int random;
            clrscr();

            fr.open(SourcefileName, ios::binary);   // input mode, i.e. read only
            if (!fr)
            {
                cprintf("File can not be opened.");
                return 0;
            }
            fw.open(DestinationName, ios::binary | ios::out | ios::trunc);   // created, or truncated if it already exists

            while (fr)
            {
                int i;
                while (fr != 0)
                {
                    fr.get(c);   // read a character from the file and advance its pointer
                    char ch;
                    ch = c;
                    ch = ch - 1;
                    fw << ch;    // append the shifted character to the output file
                }
            }
            fr.close();
            fw.close();
            return 0;
        }
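
    For reference, here is a minimal sketch - not part of the question's program - of the byte layout that the posted Binary_encode produces in the target file, assuming the header sizes quoted above and the 4-byte length field written with sizeof(long). It is written in TypeScript purely for illustration. Note that, going only by the posted code, the extra 4-byte length field combined with the seekg(TextSourceSize + 1, ios::cur) skip leaves the trailing carrier data three bytes away from its original offsets, which container formats that store internal chunk sizes (WAV, AVI) tend to be far less tolerant of than BMP.

        // Illustrative sketch only: regions of the output file produced by the posted encoder.
        // headerSize and textSize below are example inputs, not values taken from the question.
        function describeLayout(headerSize: number, textSize: number): void {
          const lengthField = 4; // sizeof(long) on the 16-bit Turbo C++ target
          const textStart = headerSize + lengthField;
          const restStart = textStart + textSize;
          console.log(`bytes 0..${headerSize - 1}: original header, copied unchanged`);
          console.log(`bytes ${headerSize}..${textStart - 1}: hidden-text length field`);
          console.log(`bytes ${textStart}..${restStart - 1}: hidden text`);
          console.log(`bytes ${restStart}..end: remaining carrier data (shifted by 3 bytes vs. the source)`);
        }

        describeLayout(44, 1000); // e.g. a WAV carrier with a 1000-byte hidden message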

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.Net developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. This article also discusses how to create an in-memory file and read/write data from/to it using the new MemoryMappedFile feature.

    I have a database of users to whom I need to send a notice as a PDF document (the mail-sending part is not covered in this article). The PDF document will contain the company letterhead, to make it more official. The users are stored in a database table named "tblusers", and for each user I need to send a customized message addressed to them personally. The database structure for the users is given below.

        id   Title   Full Name
        1    Mr.     Sreeju Nair K. G.
        2    Dr.     Alberto Mathews
        3    Prof.   Venketachalam

    Now I am going to generate a PDF document that contains a message to the user, in the following format:

        Dear <Title> <FullName>,
        The message for the user.
        Regards,
        Administrator

    I also have an image, bg.jpg, that contains the background for the generated document. I have created an empty .NET 4.0 web application project named "iTextSharpSample". The first thing I need to do is download the iTextSharp dll from SourceForge; you can find the download here: http://sourceforge.net/projects/itextsharp/files/ I extracted the zip file and added itextsharp.dll as a reference to my project, and also added a web form named default.aspx. After doing all this, the solution explorer has the following view.

    In the default.aspx page, I inserted a grid view and associated it with a SQL data source control that binds data from tblusers. I added a button column to the grid view with the text "generate pdf". The output of the page in the browser is as follows.

    Now I am going to create a PDF document when the user clicks the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on disk. I added an event handler for the button by specifying the onrowcommand attribute. My grid view source looks like:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            DataSourceID="SqlDataSource1" Width="481px" CellPadding="4" ForeColor="#333333"
            GridLines="None" onrowcommand="Generate_PDF" >
            …………………………………………………………………………..
            …………………………………………………………………………..
        </asp:GridView>

    In the code behind, I wrote the corresponding event handler.

        protected void Generate_PDF(object sender, GridViewCommandEventArgs e)
        {
            // The button click event handler code.
            // I am going to explain the code for this section in the remaining part of the article.
        }

    The Generate_PDF method is straightforward: it reads the title, full name and message into variables, then creates the PDF using them. The code for getting the data from the grid view is as follows.

        // get the row index stored in the CommandArgument property
        int index = Convert.ToInt32(e.CommandArgument);
        // get the GridViewRow where the command is raised
        GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index];
        string title = selectedRow.Cells[1].Text;
        string fullname = selectedRow.Cells[2].Text;
        string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personnal details by visiting the member area of the website. ................................... ";

    Since I don't want to save the file to disk, I am going to use the new feature introduced in .NET Framework 4 called memory-mapped files. Using MemoryMappedFile, you can create non-persisted memory-mapped files that are not associated with a file on disk. So I am going to create a temporary file in memory, add the PDF content to it, and then write it to the output stream. To read more about MemoryMappedFile, see this MSDN article: http://msdn.microsoft.com/en-us/library/dd997372.aspx

    The portion of code below uses a MemoryMappedFile object to create a test PDF document in memory and perform read/write operations on the file. CreateViewStream() gives you a stream that can be used to read or write data to/from the file. The code is very straightforward, and I have included comments so that you can understand it.

        using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000))
        {
            // Create a new pdf document object using the constructor. The parameters passed are
            // document size, left margin, right margin, top margin and bottom margin.
            iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72, 72, 172, 72);

            // get a stream over the memory mapped file so that we can write to it
            using (MemoryMappedViewStream stream = mmf.CreateViewStream())
            {
                // associate the document with the stream.
                PdfWriter.GetInstance(d, stream);

                /* add an image as background */
                iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png"));
                jpg.Alignment = iTextSharp.text.Image.UNDERLYING;
                jpg.SetAbsolutePosition(0, 0);
                // this is the size of my background letterhead image, in points; it fits an A4 document.
                jpg.ScaleToFit(595, 842);

                d.Open();
                d.Add(jpg);
                d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname)));
                d.Add(new Paragraph("\n"));
                d.Add(new Paragraph(msg));
                d.Add(new Paragraph("\n"));
                d.Add(new Paragraph(String.Format("Administrator")));
                d.Close();
            }

            // read the file data back
            byte[] b;
            using (MemoryMappedViewStream stream = mmf.CreateViewStream())
            {
                BinaryReader rdr = new BinaryReader(stream);
                b = new byte[mmf.CreateViewStream().Length];
                rdr.Read(b, 0, (int)mmf.CreateViewStream().Length);
            }

            Response.Clear();
            Response.ContentType = "Application/pdf";
            Response.BinaryWrite(b);
            Response.End();
        }

    Press Ctrl+F5 to run the application. First I get the user list; click on the generate pdf icon, and the created document looks as follows.

    Summary: Creating PDF documents using iTextSharp is easy, and you will find plenty of information on the web. Some useful resources and references are listed below.

        http://itextsharp.com/
        http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs
        http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/

    Hope you enjoyed the article.

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async, event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to calling the WCF services directly.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written.

    In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header, the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header. If the proxy does not have a local cached file for the service response, it calls the service, writes the WCF response to the local cache file, and returns it as the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                  Value
        /WcfSampleService/PersonService.svc/rest/fetch/3     "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"               /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service's). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with a concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second and an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second and a mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison; but for similar scenarios, where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
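
    To make the proxy's decision flow described above concrete, here is a minimal TypeScript sketch of that logic. It is not the code from the GitHub sample: the shared memcached tag cache is stood in for by an in-memory Map, the cache-file naming and the upstream WCF host/port are assumptions, and the .header file handling and error handling are omitted.

        import * as http from 'http';
        import * as fs from 'fs';

        // ETag cache: URL path -> current ETag (memcached in the real sample; a Map here for brevity).
        const tagCache = new Map<string, string>();

        http.createServer((req, res) => {
          const path = req.url ?? '/';
          const currentTag = tagCache.get(path);
          const clientTag = req.headers['if-none-match'];

          // 1. The client already holds the current version: answer 304 with an empty body.
          if (currentTag && clientTag === currentTag) {
            res.writeHead(304, { ETag: currentTag });
            res.end();
            return;
          }

          // 2. The proxy has a cached body for the current ETag: serve it without calling WCF.
          if (currentTag) {
            const cacheFile = `./caches/${currentTag.replace(/"/g, '')}.body`; // naming scheme assumed
            if (fs.existsSync(cacheFile)) {
              res.writeHead(200, { ETag: currentTag });
              res.end(fs.readFileSync(cacheFile));
              return;
            }
          }

          // 3. Otherwise forward to the WCF service, cache the body and ETag, and relay the response.
          http.get({ host: 'localhost', port: 8080, path }, upstream => {
            const chunks: Buffer[] = [];
            upstream.on('data', (c: Buffer) => chunks.push(c));
            upstream.on('end', () => {
              const body = Buffer.concat(chunks);
              const etag = upstream.headers.etag;
              if (etag) {
                tagCache.set(path, etag);
                fs.writeFileSync(`./caches/${etag.replace(/"/g, '')}.body`, body);
              }
              res.writeHead(upstream.statusCode ?? 200, upstream.headers);
              res.end(body);
            });
          });
        }).listen(8010);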

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don't "trample" on each other's toes, resulting in data corruption. In a nutshell, here's the issue we were trying to address. In everyday life, traffic lights serve the same purpose: they ensure that traffic flows smoothly and, when everyone follows the rules, there are no accidents at intersections.

    As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches, and every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there's clearly no danger of a collision, you know what I mean.

    The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held by detecting whether there were one or more transactions that needed conflicting access; if there were no conflicts, you could get by without holding any lock at all. In many "well-behaved" workloads there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the "normal" behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case, so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness.

    So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies of the data (for availability). Since the change can be propagated asynchronously, there will be some interval in time during which some copies have received the update and others haven't. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they're all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all's good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies.

    So what's the problem with this? There are two major issues. First, IT'S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies. That's pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-dated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency).

    Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might end up with different versions of the data item in different copies, and the application program also has to resolve those conflicts and then propagate the correct value to the copies that are out-dated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don't handle this correctly, and worse, don't even know it. Imagine having hundreds of millions of records in your database - how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was to read a data item. Wouldn't it be simpler to avoid this problem in the first place?

    Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy, so the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and they are also aware of the "currency" of a replica - in other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver sends the request to exactly the right replica, not to many).

    So, back to my original point. A well designed and well architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible, in order to deliver a highly efficient and high performance system. If you've ever programmed an Oracle NoSQL Database application, you'll know the difference!
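
    A rough TypeScript sketch of the staleness-aware read routing described above - the Copy type, its fields and the selection rule are hypothetical illustrations of the idea, not the Oracle NoSQL Database driver API:

        // Hypothetical sketch of staleness-aware read routing in a master-replica setup.
        interface Copy {
          name: string;
          isMaster: boolean;
          lagSeconds: number; // how far behind the master this copy is believed to be
        }

        function routeRead(copies: Copy[], maxStalenessSeconds: number): Copy {
          // Prefer a replica that is fresh enough, to keep read load off the master.
          const eligible = copies.filter(c => !c.isMaster && c.lagSeconds <= maxStalenessSeconds);
          if (eligible.length > 0) {
            // A real driver might also weigh locality or load; here we just take the least-lagged one.
            return eligible.reduce((best, c) => (c.lagSeconds < best.lagSeconds ? c : best));
          }
          // Nothing is fresh enough (or zero staleness was requested): go to the master.
          return copies.find(c => c.isMaster)!;
        }

        // Example: tolerate up to 1 second of staleness; a single read goes to a single copy.
        const copy = routeRead(
          [
            { name: 'master', isMaster: true, lagSeconds: 0 },
            { name: 'replica-1', isMaster: false, lagSeconds: 0.4 },
            { name: 'replica-2', isMaster: false, lagSeconds: 5 },
          ],
          1,
        );
        console.log(copy.name); // replica-1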

    Read the article

  • Facebook - finding users by name

    - by Gublooo
    Hello guys, I'm very new to Facebook and wanted to know if this is even possible with the Facebook API. If a user searches for a name on my website - say "Jamie Smith" - I want to pass that name to Facebook and find all users who match it, so that I can get back their photo and name to display on my site and visitors can identify the right person. I'm using PHP, so any example or link you can provide would be really helpful. Thanks
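
    One possible starting point is the Graph API's user search. The sketch below is in TypeScript rather than PHP (in PHP the same HTTP call can be made with file_get_contents or cURL plus json_decode). It assumes the old public search endpoint, https://graph.facebook.com/search?q=<name>&type=user, and the https://graph.facebook.com/<id>/picture picture URL; both have since been restricted by Facebook, so check the current Graph API documentation and permission requirements before relying on them.

        // Hedged sketch: search Facebook users by name and collect name + picture URL.
        // The endpoint, parameters and response shape are assumptions about the old Graph API.
        interface FbUser { id: string; name: string; }

        async function searchUsersByName(name: string, accessToken: string) {
          const url = 'https://graph.facebook.com/search' +
            `?q=${encodeURIComponent(name)}&type=user&access_token=${encodeURIComponent(accessToken)}`;
          const response = await fetch(url);                       // built-in fetch in recent Node versions
          const payload = (await response.json()) as { data: FbUser[] };
          return payload.data.map(user => ({
            name: user.name,
            picture: `https://graph.facebook.com/${user.id}/picture`, // assumed picture endpoint
          }));
        }

        // Usage:
        // searchUsersByName('Jamie Smith', '<access token>').then(console.log);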

    Read the article
