Search Results

Search found 14435 results on 578 pages for 'disk usage'.

  • unsigned char* buffer (FreeType2 Bitmap) to System::Drawing::Bitmap.

    - by Dennis Roche
    Hi, I'm trying to convert a FreeType2 bitmap to a System::Drawing::Bitmap in C++/CLI. FT_Bitmap has an unsigned char* buffer that contains the data to write. I have got it somewhat working saving it to disk as a *.tga, but when saving as *.bmp it renders incorrectly. I believe that the size of byte[] is incorrect and that my data is truncated. Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats etc. would be helpful. Thanks!! C++/CLI code. FT_Bitmap *bitmap = &face->glyph->bitmap; int width = (face->bitmap->metrics.width / 64); int height = (face->bitmap->metrics.height / 64); // must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); // as *.tga void *buffer = bytes ? malloc(bytes) : NULL; if (buffer) { memset(buffer, 0, bytes); for (int i = 0; i < glyph->rows; ++i) memcpy((char *)buffer + (i * width), glyph->buffer + (i * glyph->pitch), glyph->pitch); WriteTGA("Test.tga", buffer, width, height); } // as *.bmp array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes); Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, bitmap->PixelFormat); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); systemBitmap->UnlockBits(bitmapData); systemBitmap->Save("Test.bmp"); Reference, FT_Bitmap typedef struct FT_Bitmap_ { int rows; int width; int pitch; unsigned char* buffer; short num_grays; char pixel_mode; char palette_mode; void* palette; } FT_Bitmap; Reference, WriteTGA bool WriteTGA(const char *filename, void *pxl, uint16 width, uint16 height) { FILE *fp = NULL; fopen_s(&fp, filename, "wb"); if (fp) { TGAHeader header; memset(&header, 0, sizeof(TGAHeader)); header.imageType = 3; header.width = width; header.height = height; header.depth = 8; header.descriptor = 0x20; fwrite(&header, sizeof(header), 1, fp); fwrite(pxl, sizeof(uint8) * width * height, 1, fp); fclose(fp); return true; } return false; } Update FT_Bitmap *bitmap = &face->glyph->bitmap; // stride must be aligned on a 32 bit boundary or 4 bytes int depth = 8; int stride = ((width * depth + 31) & ~31) >> 3; int bytes = (int)(stride * height); target = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed); // create bitmap data, lock pixels to be written. BitmapData^ bitmapData = target->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, target->PixelFormat); array<Byte>^ values = gcnew array<Byte>(bytes); Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes); Marshal::Copy(values, 0, bitmapData->Scan0, bytes); target->UnlockBits(bitmapData);
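
    The usual culprits for a sheared or truncated *.bmp in code like the above are the row copy and the pixel format: the glyph rows are pitch bytes apart while the locked bitmap rows are Stride bytes apart, and an 8-bit grayscale glyph does not map directly onto Format24bppRgb. A minimal C# sketch of a stride-aware copy into an 8bpp indexed bitmap (method and variable names are illustrative, not from the post):

        // Copy an 8bpp grayscale glyph buffer into a Bitmap row by row,
        // honoring both the source pitch and the destination stride.
        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static Bitmap CopyGlyphToBitmap(byte[] glyphBuffer, int width, int height, int pitch)
        {
            Bitmap bmp = new Bitmap(width, height, PixelFormat.Format8bppIndexed);

            // Give the indexed bitmap a grayscale palette so values 0..255 render as shades.
            ColorPalette palette = bmp.Palette;
            for (int i = 0; i < 256; i++)
                palette.Entries[i] = Color.FromArgb(i, i, i);
            bmp.Palette = palette;

            BitmapData data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                           ImageLockMode.WriteOnly, bmp.PixelFormat);
            try
            {
                // One row at a time: the source advances by pitch, the destination by
                // Stride; either may carry padding, so a single bulk copy shears the image.
                for (int y = 0; y < height; y++)
                    Marshal.Copy(glyphBuffer, y * pitch,
                                 new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride), width);
            }
            finally
            {
                bmp.UnlockBits(data);
            }
            return bmp;
        }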

  • MODI leaking memory

    - by Khragg
    I have an app where I'm using MODI 2007 to OCR several multi-page tiff files. I have found that when I kick it off on a directory that contains several good tiffs but also some tiffs that cannot be opened in Windows Picture and Fax Viewer, then MODI also fails to OCR those "bad" tiffs. When this happens, the app is unable to reclaim any of the memory that was used by MODI to OCR those tiffs. After the tool tries to OCR too many of these "bad" tiffs, the machine runs out of memory and the app crashes. I have tried several code fixes from the web that supposedly fix any MODI memory leaks, but so far none have worked for me. I am pasting in the part of the code below that does the OCRing: StringBuilder strRecText = new StringBuilder(10000); MODI.Document doc1 = new MODI.Document(); doc1.Create(name); try { doc1.OCR(MODI.MiLANGUAGES.miLANG_ENGLISH, true, true); // this will ocr all pages of a multi-page tiff file } catch (Exception e) { doc1.Close(false); // clean up if (doc1 != null) { GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers(); System.Runtime.InteropServices.Marshal.FinalReleaseComObject(doc1); doc1 = null; } } MODI.Images images = doc1.Images; for (int imageCounter = 0; imageCounter < images.Count; imageCounter++) { if (imageCounter > 0) { if (!noPageBreakFlag) { strRecText.Append((char)pageBreakChar); } } MODI.Image image = (MODI.Image)images[imageCounter]; MODI.Layout layout = image.Layout; strRecText.Append(layout.Text); GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers(); if (layout != null) { System.Runtime.InteropServices.Marshal.FinalReleaseComObject(layout); layout = null; } if (image != null) { System.Runtime.InteropServices.Marshal.FinalReleaseComObject(image); image = null; } } File.AppendAllText(ocrFile, strRecText.ToString()); // write the OCR file out to disk GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers(); if (images != null) { System.Runtime.InteropServices.Marshal.FinalReleaseComObject(images); images = null; } GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers(); doc1.Close(false); // clean up if (doc1 != null) { System.Runtime.InteropServices.Marshal.FinalReleaseComObject(doc1); doc1 = null; } GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers();
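
    Two observations on the code above. First, as written, the catch block releases doc1 and sets it to null, then execution falls through to doc1.Images below it, which will throw on every bad tiff; a return (or rethrow) appears to be missing after the cleanup. Second, when a COM server leaks unmanaged memory, no amount of in-process GC.Collect calls can reclaim it; a hedged workaround is to fence each OCR run into a short-lived worker process so the OS reclaims everything on exit. A sketch, where OcrWorker.exe is a hypothetical helper that runs the MODI code above on a single file:

        using System.Diagnostics;

        static bool OcrInWorkerProcess(string tiffPath, string textPath)
        {
            // OcrWorker.exe is a hypothetical console exe that OCRs one file and exits.
            var psi = new ProcessStartInfo("OcrWorker.exe",
                string.Format("\"{0}\" \"{1}\"", tiffPath, textPath))
            {
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (Process worker = Process.Start(psi))
            {
                // Bound the damage a bad tiff can do: kill a hung worker after 5 minutes.
                if (!worker.WaitForExit(5 * 60 * 1000))
                {
                    worker.Kill();
                    return false;
                }
                return worker.ExitCode == 0; // leaked memory dies with the process either way
            }
        }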

  • WCF Stream.Read always returns 0 in client

    - by G_M
    I've spent most of my day trying to figure out why this isn't working. I have a WCF service that streams an object to the client. The client is then supposed to write the file to its disk. But when I call stream.Read(buffer, 0, bufferLength) it always returns 0. Here's my code: namespace StreamServiceNS { [ServiceContract] public interface IStreamService { [OperationContract] Stream downloadStreamFile(); } } class StreamService : IStreamService { public Stream downloadStreamFile() { ISSSteamFile sFile = getStreamFile(); BinaryFormatter bf = new BinaryFormatter(); MemoryStream stream = new MemoryStream(); bf.Serialize(stream, sFile); return stream; } } Service config file: <system.serviceModel> <services> <service name="StreamServiceNS.StreamService"> <endpoint address="stream" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IStreamService" name="BasicHttpEndpoint_IStreamService" contract="SWUpdaterService.ISWUService" /> </service> </services> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IStreamService" transferMode="StreamedResponse" maxReceivedMessageSize="209715200"></binding> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <serviceThrottling maxConcurrentCalls ="100" maxConcurrentSessions="400"/> <serviceMetadata httpGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> Client: TestApp.StreamServiceRef.StreamServiceClient client = new StreamServiceRef.StreamServiceClient(); try { Stream stream = client.downloadStreamFile(); int bufferLength = 8 * 1024; byte[] buffer = new byte[bufferLength]; FileStream fs = new FileStream(@"C:\test\testFile.exe", FileMode.Create, FileAccess.Write); int bytesRead; while ((bytesRead = stream.Read(buffer, 0, bufferLength)) > 0) { fs.Write(buffer, 0, bytesRead); } stream.Close(); fs.Close(); } catch (Exception e) { Console.WriteLine("Error: " + e.Message); } Client app.config: <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpEndpoint_IStreamService" maxReceivedMessageSize="209715200" transferMode="StreamedResponse"> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://[server]/StreamServices/streamservice.svc/stream" binding="basicHttpBinding" bindingConfiguration="BasicHttpEndpoint_IStreamService" contract="StreamServiceRef.IStreamService" name="BasicHttpEndpoint_IStreamService" /> </client> </system.serviceModel> (some code clipped for brevity) I've read everything I can find on making WCF streaming services, and my code looks no different than theirs. I can replace the streaming with buffering and send an object that way fine, but when I try to stream, the client always sees the stream as "empty". The testFile.exe gets created, but its size is 0KB. What am I missing?
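
    Before digging into the bindings, one detail is worth checking: BinaryFormatter.Serialize leaves the MemoryStream positioned at its end, so the client's first Read starts at end-of-stream and returns 0 bytes, which matches the empty 0 KB testFile.exe exactly. A sketch of the service method with the stream rewound (otherwise the code above, unchanged, with System.IO and System.Runtime.Serialization.Formatters.Binary as in the original):

        public Stream downloadStreamFile()
        {
            ISSSteamFile sFile = getStreamFile();
            BinaryFormatter bf = new BinaryFormatter();
            MemoryStream stream = new MemoryStream();
            bf.Serialize(stream, sFile);
            stream.Position = 0; // rewind; without this, every client Read() returns 0
            return stream;
        }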

  • Disable antialiasing for a specific GDI device context

    - by Jacob Stanley
    I'm using a third party library to render an image to a GDI DC and I need to ensure that any text is rendered without any smoothing/antialiasing so that I can convert the image to a predefined palette with indexed colors. The third party library I'm using for rendering doesn't support this and just renders text as per the current Windows settings for font rendering. They've also said that it's unlikely they'll add the ability to switch anti-aliasing off any time soon. The best workaround I've found so far is to call the third party library in this way (error handling and prior settings checks omitted for brevity): private static void SetFontSmoothing(bool enabled) { int pv = 0; SystemParametersInfo(Spi.SetFontSmoothing, enabled ? 1 : 0, ref pv, Spif.None); } // snip Graphics graphics = Graphics.FromImage(bitmap) IntPtr deviceContext = graphics.GetHdc(); SetFontSmoothing(false); thirdPartyComponent.Render(deviceContext); SetFontSmoothing(true); This obviously has a horrible effect on the operating system: other applications flicker from ClearType enabled to disabled and back every time I render the image. So the question is, does anyone know how I can alter the font rendering settings for a specific DC? Even if I could just make the changes process or thread specific instead of affecting the whole operating system, that would be a big step forward! (That would give me the option of farming this rendering out to a separate process; the results are written to disk after rendering anyway.) EDIT: I'd like to add that I don't mind if the solution is more complex than just a few API calls. I'd even be happy with a solution that involved hooking system dlls if it was only about a day's work. EDIT: Background Information The third-party library renders using a palette of about 70 colors. After the image (which is actually a map tile) is rendered to the DC, I convert each pixel from its 32-bit color back to its palette index and store the result as an 8bpp greyscale image. This is uploaded to the video card as a texture. During rendering, I re-apply the palette (also stored as a texture) with a pixel shader executing on the video card. This allows me to switch and fade between different palettes instantaneously instead of needing to regenerate all the required tiles. It takes between 10-60 seconds to generate and upload all the tiles for a typical view of the world. EDIT: Renamed GraphicsDevice to Graphics The class GraphicsDevice in the previous version of this question is actually System.Drawing.Graphics. I had renamed it (using GraphicsDevice = ...) because the code in question is in the namespace MyCompany.Graphics and the compiler wasn't able to resolve it properly. EDIT: Success! I even managed to port the PatchIat function below to C# with the help of Marshal.GetFunctionPointerForDelegate. The .NET interop team really did a fantastic job!
I'm now using the following syntax, where Patch is an extension method on System.Diagnostics.ProcessModule: module.Patch( "Gdi32.dll", "CreateFontIndirectA", (CreateFontIndirectA original) => font => { font->lfQuality = NONANTIALIASED_QUALITY; return original(font); }); private unsafe delegate IntPtr CreateFontIndirectA(LOGFONTA* lplf); private const int NONANTIALIASED_QUALITY = 3; [StructLayout(LayoutKind.Sequential)] private struct LOGFONTA { public int lfHeight; public int lfWidth; public int lfEscapement; public int lfOrientation; public int lfWeight; public byte lfItalic; public byte lfUnderline; public byte lfStrikeOut; public byte lfCharSet; public byte lfOutPrecision; public byte lfClipPrecision; public byte lfQuality; public byte lfPitchAndFamily; public unsafe fixed sbyte lfFaceName [32]; }
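
    For anyone porting the same IAT trick, the pivotal interop step is turning the managed hook delegate into a native function pointer that can be written over the import table entry, while making sure the delegate outlives the hook. A hedged sketch (the IAT lookup and patch itself is omitted; names are illustrative):

        using System;
        using System.Runtime.InteropServices;

        class IatHookSupport
        {
            // Same shape as the delegate in the post; IntPtr stands in for LOGFONTA*.
            [UnmanagedFunctionPointer(CallingConvention.StdCall)]
            delegate IntPtr CreateFontIndirectA(IntPtr lplf);

            static Delegate keepAlive; // the GC must not collect the marshaling thunk

            static IntPtr NativePointerFor(CreateFontIndirectA hook)
            {
                keepAlive = hook; // the pointer is only valid while the delegate lives
                return Marshal.GetFunctionPointerForDelegate(hook);
            }
        }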

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction that comprises getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permissions to select in database A but I can create tables and insert/update etc in database B. The manipulation and insertion part is written in perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the perl classes. How can I go about doing this so that, if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.), I can easily track back and pick up from where the error happened? Doing the transaction in one go doesn't seem like a good option because the amount of data from database A means it would take at least a day or 2 for data manipulation and insertion into database B. The data from database A can be grouped into around 1000 groups using unique keys, with each key containing 1000s of rows. One way I thought this could be done is to write a script that commits per group, meaning I've got to track which group has already been inserted into database B. The only way I can think of to track the progress of which groups have been processed or not is either in a log file or in a table in database B. A second way I thought could work is to dump all the necessary fields needed for loading the classes for manipulation and insertion into a flatfile, read the file to initialize the classes and insert into database B. This also means that I've got to do some logging, but it should narrow it down to the exact row in the flatfile if any error occurs. The script will look something like this: use strict; use warnings; use DBI; #connect to database A my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password, { RaiseError => 1, AutoCommit => 0 }); #statement to get data based on group unique key my $sth = $dbh->prepare($my_sql); my @groups; #I have a list of this already open my $fh, '>>', 'my_logfile' or die "can't open logfile $!"; eval { foreach my $g (@groups){ #subroutine to check if group has already been processed, either from log file or from database table next if is_processed($g); $sth->execute($g); my $data = $sth->fetchall_arrayref; #manipulate $data, then use it to load perl classes for insertion into database B #. #. #. print $fh "$g\n"; } }; if ($@){ $dbh->rollback; die "something wrong...rollback"; } So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in table or file), skipping the ones that've been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out from one and inserting into another? The process doesn't need to be all in Perl, as long as I can reuse the perl classes for manipulating and inserting the data into the database.
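
    Since the post notes the process doesn't need to stay in Perl, here is a hedged C# sketch of the commit-per-group idea, with the progress marker written inside the same transaction as the group's rows so a crash at any point leaves a clean resume point. FetchGroupFromA, Manipulate, InsertIntoB, IsProcessed and MarkProcessed are hypothetical stand-ins for the existing extract/manipulate/load steps:

        using System.Collections.Generic;
        using System.Data.SqlClient;

        static void LoadAllGroups(SqlConnection databaseB, IEnumerable<string> groups)
        {
            foreach (string g in groups)
            {
                if (IsProcessed(databaseB, g)) continue;       // resume: skip finished groups
                using (SqlTransaction tx = databaseB.BeginTransaction())
                {
                    var rows = FetchGroupFromA(g);             // SELECT from database A
                    var cooked = Manipulate(rows);             // existing manipulation step
                    InsertIntoB(databaseB, tx, cooked);        // load into database B
                    MarkProcessed(databaseB, tx, g);           // progress row rides in the same tx
                    tx.Commit();                               // a group lands whole, or not at all
                }
            }
        }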

  • being able to solve google code jam problem sets

    - by JPro
    This is not a homework question, but rather my intention to know if this is what it takes to learn programming. I keep logging into TopCoder not to actually participate but to get a basic understanding of how the problems are solved. But in truth I don't understand what the problems are asking or how to translate them into algorithms that can solve them. Just now I happened to look at the ACM ICPC 2010 World Finals, which is being held in China. The teams were given problem sets and one of them is this: Given at most 100 points on a plane with distinct x-coordinates, find the shortest cycle that passes through each point exactly once, goes from the leftmost point always to the right until it reaches the rightmost point, then goes always to the left until it gets back to the leftmost point. Additionally, two points are given such that the path from left to right contains the first point, and the path from right to left contains the second point. This seems to be a very simple DP: after processing the last k points, and with the first path ending in point a and the second path ending in point b, what is the smallest total length to achieve that? This is O(n^2) states, transitions in O(n). We deal with the two special points by forcing the first path to contain the first one, and the second path contain the second one. Now I have no idea what I am supposed to solve after reading the problem set. And there's another one from Google Code Jam: Problem In a big, square room there are two point light sources: one is red and the other is green. There are also n circular pillars. Light travels in straight lines and is absorbed by walls and pillars. The pillars therefore cast shadows: they do not let light through. There are places in the room where no light reaches (black), where only one of the two light sources reaches (red or green), and places where both lights reach (yellow). Compute the total area of each of the four colors in the room. Do not include the area of the pillars. Input * One line containing the number of test cases, T. Each test case contains, in order: * One line containing the coordinates x, y of the red light source. * One line containing the coordinates x, y of the green light source. * One line containing the number of pillars n. * n lines describing the pillars. Each contains 3 numbers x, y, r. The pillar is a disk with the center (x, y) and radius r. The room is the square described by 0 ≤ x, y ≤ 100. Pillars, room walls and light sources are all disjoint, they do not overlap or touch. Output For each test case, output: Case #X: black area red area green area yellow area Is it required that people who program be able to solve these types of problems? I would appreciate it if anyone can help me interpret the Google Code Jam problem set, as I wish to participate in this year's Code Jam to see if I can do anything or not. Thanks.
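
    For what it's worth, the quoted "simple DP" is the classic bitonic-tour recurrence. A hedged C# sketch of the plain version (points pre-sorted by x; the two forced special points from the problem statement are omitted for brevity):

        using System;

        // best[j]: both chains start at point 0; after processing points 0..i,
        // one chain ends at the last processed point i, the other ends at j.
        static double ShortestBitonicTour(double[] x, double[] y)
        {
            int n = x.Length;
            Func<int, int, double> d = (a, b) =>
                Math.Sqrt((x[a] - x[b]) * (x[a] - x[b]) + (y[a] - y[b]) * (y[a] - y[b]));

            var best = new double[n];
            for (int j = 1; j < n; j++) best[j] = double.PositiveInfinity;

            for (int i = 1; i < n; i++)
            {
                var next = new double[n];
                for (int j = 0; j < n; j++) next[j] = double.PositiveInfinity;
                for (int j = 0; j < i; j++)
                {
                    if (double.IsPositiveInfinity(best[j])) continue;
                    // extend the chain ending at i-1 to i; the other chain stays at j
                    next[j] = Math.Min(next[j], best[j] + d(i - 1, i));
                    // extend the chain ending at j to i; the other chain now ends at i-1
                    next[i - 1] = Math.Min(next[i - 1], best[j] + d(j, i));
                }
                best = next;
            }
            // close the cycle: connect the chain stuck at j to the rightmost point
            double answer = double.PositiveInfinity;
            for (int j = 0; j < n - 1; j++)
                answer = Math.Min(answer, best[j] + d(j, n - 1));
            return answer;
        }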

  • Windows NT Service shutdown issues

    - by Jeremiah Gowdy
    I have developed middleware that provides RPC functionality to multiple client applications on multiple platforms within our organization. The middleware is written in C# and runs as a Windows NT Service. It handles things like file access to network shares, database access, etc. The middleware is hosted on two high end systems running Windows Server 2008 R2. When one of our server administrators goes to reboot the machine, primarily to do Windows Updates, there are serious problems with how the system behaves with regard to my NT Service. My service is designed to immediately stop listening for new connections, immediately start refusing new requests on existing connections, and otherwise shut down as rapidly as possible in the case of an OnStop or OnShutdown request from the SCM. Still, to maintain system integrity, operations that are currently in progress are allowed to continue for a reasonable time. Usually the server shuts down inside of 30 seconds (when the service is manually stopped, for example). However, when the system is instructed to restart, my service immediately loses access to network drives and UNC paths, causing data integrity problems for any open files and partial writes to those locations. My service does list Workstation (and thus SMB Redirector) as a dependency, so I would think that my service would need to be stopped prior to Workstation/Redirector being stopped if Windows were honoring those dependencies. Basically, my application is forced to crash and burn, failing remote procedure calls and eventually being forced to terminate by the operating system after a timeout period has elapsed (seems to be on the order of 20-30 seconds). Unlike a Windows application, my Windows NT Service doesn't seem to have any power to stop a system shutdown in progress, delay the system shutdown, or even just the opportunity to save out any pending network share disk writes before being forcibly disconnected and shut down. How is an NT Service developer supposed to have any kind of application integrity in this environment? Why is it that Forms applications get all of the opportunity to finish their business prior to shutdown, while services seem to get no such benefits? I have tried: Calling SetProcessShutdownParameters via P/Invoke to try to notify my application of the shutdown sooner to avoid Redirector shutting down before I do. Calling ServiceBase.RequestAdditionalTime with a value less than or equal to the two minute limit. Tweaking the WaitToKillServiceTimeout. Everything I can think of to make my service shut down faster. But in the end, I still get ~30 seconds of problematic time in which my service doesn't even seem to have been notified of an OnShutdown event yet, but requests are failing due to Redirector no longer servicing my network share requests. How is this issue meant to be resolved? What can I do to delay or stop the shutdown, or at least be allowed to shut down my active tasks without Redirector services disappearing out from under me? I can understand what Microsoft is trying to do to prevent services from dragging their feet and slowing shutdowns, but that seems like a great goal for Windows client operating systems, not for servers. I don't want my servers to shut down fast, I want operational integrity and graceful shutdowns. Thanks in advance for any help you can provide. PS: regarding writing my own middleware, this is for a telephony application with sub-second "soft-realtime" response time requirements. It does make sense, and it's not a point I'm looking to debate. :)
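
    One avenue the post doesn't mention is the Vista/Server 2008 pre-shutdown notification (SERVICE_ACCEPT_PRESHUTDOWN), which the SCM delivers before the normal shutdown sequence starts tearing down services such as Workstation. ServiceBase doesn't expose it, so the common community workaround pokes the internal acceptedCommands field via reflection. A sketch; it relies on internal details of the .NET Framework's ServiceBase and may break between framework versions:

        using System.Reflection;
        using System.ServiceProcess;

        public class MyMiddlewareService : ServiceBase
        {
            const int SERVICE_ACCEPT_PRESHUTDOWN = 0x100;
            const int SERVICE_CONTROL_PRESHUTDOWN = 0xF;

            public MyMiddlewareService()
            {
                // ServiceBase has no public flag for pre-shutdown; flip the bit on
                // the internal acceptedCommands field (an internal implementation detail).
                FieldInfo acceptedCommands = typeof(ServiceBase).GetField(
                    "acceptedCommands", BindingFlags.Instance | BindingFlags.NonPublic);
                int value = (int)acceptedCommands.GetValue(this);
                acceptedCommands.SetValue(this, value | SERVICE_ACCEPT_PRESHUTDOWN);
            }

            protected override void OnCustomCommand(int command)
            {
                if (command == SERVICE_CONTROL_PRESHUTDOWN)
                {
                    // Drain in-flight RPCs and flush network-share writes here,
                    // while Workstation/Redirector are still running.
                    Stop();
                }
                else
                {
                    base.OnCustomCommand(command);
                }
            }
        }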

  • How do I properly add existing source code files to my Xcode project?

    - by BeachRunnerJoe
    I'm new to iPhone development and I'm still getting familiar with the Mac dev environment, including Xcode. I want to add some 3rd party code to my iPhone project, but when I add the "existing files" to my Xcode project, I'm presented with a dialog box that has far too many options that I don't understand and, as such, my project isn't working. When I #import headerfilename.h, I get a build error that reads headerfilename.h: No such file or directory. Can anyone explain to me what all these options mean or give me a link to some documentation that can? I'm having a hard time finding anything in Apple's docs. Which options do I want to choose to add existing source code files to my Xcode project? I should note that the source code files that I'm trying to add are located in my project/Classes/frameworkname/ directory. After they're added, do I need to reference this new code directory in my project settings anywhere (i.e. some kind of header file directory variable)? Thanks so much! Update: I found the following answers/responses on the apple dev forums that were very useful and helped me fix my issue... To make it simple : - if you do not check the copy option, the file stay where it is. - if you check it, it is copied in your project folders In the first case (what it seems you are doing) you need to tell the compiler that the header files are in another directory : - project info - build - search paths - User Header Search Path : add the directory from where you took the header file Hope this will help You have discovered the most confusing dialog box that ever came out of Cupertino. Six years of Xcode, and this thing still is partly a mystery to me. To even get that far, I had to make many test projects to try and reverse-engineer what this thing does. The "Copy" box means that it will copy the files as they are right now, into the project. If this box is not checked, then it just references those files during a build and copies them as they are at THAT time. For source code, you want the Copy box checked. The "relative to" is a total mystery to me and I can't help you with that. I usually leave it however it is already set. Does it mean relative to where they are on disk, or the arrangement in Xcode, or in the bundle? Who knows. The last 2 radio buttons SEEM to mean that it will either re-create the folder structure of the folder you are adding, or just put "fake" folders in Xcode that point to the real folders. This is probably your problem - you are adding source code that is not all at the top level, and when it goes to find it, it does not re-create the hierarchy. Others can supply a better way, hopefully, but what I would do is put all of the source in one folder and add that, using the Copy box. Then in Xcode you can make whatever bogus folders you want and put the source file names in those fake folders.

  • Calling system commands from Perl

    - by Dan J
    In an older version of our code, we called out from Perl to do an LDAP search as follows: # Pass the base DN in via the ldapsearch-specific environment variable # (rather than as the "-b" parameter) to avoid problems of shell # interpretation of special characters in the DN. $ENV{LDAP_BASEDN} = $ldn; $lcmd = "ldapsearch -x -T -1 -h $gLdapServer" . <snip> " > $lworkfile 2>&1"; system($lcmd); if (($? != 0) || (! -e "$lworkfile")) { # Handle the error } The code above would result in a successful LDAP search, and the output of that search would be in the file $lworkfile. Unfortunately, we recently reconfigured openldap on this server so that a "BASE DC=" is specified in /etc/openldap/ldap.conf and /etc/ldap.conf. That change seems to mean ldapsearch ignores the LDAP_BASEDN environment variable, and so my ldapsearch fails. I've tried a couple of different fixes but without success so far: (1) I tried going back to using the "-b" argument to ldapsearch, but escaping the shell metacharacters. I started writing the escaping code: my $ldn_escaped = $ldn; $ldn_escaped =~ s/\/\\/g; $ldn_escaped =~ s/`/\`/g; $ldn_escaped =~ s/$/\$/g; $ldn_escaped =~ s/"/\"/g; That threw up some Perl errors because I haven't escaped those regexes properly (the reported line number matches the regex with the backticks in it): Backticks found where operator expected at /tmp/mycommand line 404, at end of line At the same time I started to doubt this approach and looked for a better one. (2) I then saw some Stackoverflow questions (here and here) that suggested a better solution. Here's the code: print("Processing..."); # Pass the arguments to ldapsearch by invoking open() with an array. # This ensures the shell does NOT interpret shell metacharacters. my(@cmd_args) = ("-x", "-T", "-1", "-h", "$gLdapPool", "-b", "$ldn", <snip> ); $lcmd = "ldapsearch"; open my $lldap_output, "-|", $lcmd, @cmd_args; while (my $lline = <$lldap_output>) { # I can parse the contents of my file fine } $lldap_output->close; The two problems I am having with approach (2) are: a) Calling open or system with an array of arguments does not let me pass > $lworkfile 2>&1 to the command, so I can't stop the ldapsearch output being sent to screen, which makes my output look ugly: Processing...ldap_bind: Success (0) additional info: Success b) I can't figure out how to choose the location (i.e. path and file name) for the file handle passed to open, i.e. I don't know where $lldap_output is. Can I move/rename it, or inspect it to find out where it is (or is it not actually saved to disk)? Based on the problems with (2), this makes me think I should go back to approach (1), but I'm not quite sure how to
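
    On point (b): the handle returned by the list-form open is a pipe, not a file on disk, so there is nothing to move or rename; the while loop decides where the bytes go, and printing each $lline to a filehandle opened on $lworkfile reproduces the old behaviour. For comparison, the same two requirements (an argument list that bypasses the shell, output captured in-process rather than via > file 2>&1) look like this in C#, shown purely as a cross-language illustration:

        using System.Diagnostics;
        using System.IO;

        static void RunLdapSearch(string baseDn, string outputFile)
        {
            ProcessStartInfo psi = new ProcessStartInfo("ldapsearch")
            {
                UseShellExecute = false,        // no shell, so no metacharacter trouble
                RedirectStandardOutput = true   // nothing is echoed to the screen
            };
            // ArgumentList (.NET Core 2.1+) passes each entry as one argv element.
            foreach (string arg in new[] { "-x", "-T", "-1", "-b", baseDn })
                psi.ArgumentList.Add(arg);

            using (Process proc = Process.Start(psi))
            {
                File.WriteAllText(outputFile, proc.StandardOutput.ReadToEnd());
                proc.WaitForExit();             // stderr handling elided for brevity
            }
        }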

  • C++/msvc6 application crashes due to heap corruption, any hints?

    - by David Alfonso
    Hello all, let me say first that I'm writing this question after months of trying to find out the root of a crash happening in our application. I'll try to detail as much as possible what I've already found out about it. About the application It runs on Windows XP Professional SP2. It's built with Microsoft Visual C++ 6.0 with Service Pack 6. It's MFC based. It uses several external dlls (e.g. Xerces, ZLib or ACE). It has high performance requirements. It does a lot of network and hard disk I/O, but it's also CPU intensive. It has an exception handling mechanism which generates a minidump when an unhandled exception occurs. Facts about the crash It only happens on multiprocessor/multicore machines and under heavy loads of work. It happens at random (neither we nor our client have found a pattern yet). We cannot reproduce the crash in our testing lab. It only happens on some production systems (but always on multicore machines). It always ends up crashing at the same point, although the complete stack is not always the same. Let me add the stack of the crashing thread (obtained using WinDbg; sorry, we don't have symbols) ChildEBP RetAddr Args to Child WARNING: Stack unwind information not available. Following frames may be wrong. 030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9 030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0]) 030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo]) 030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96 030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo]) 030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0]) 030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b 030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo]) 7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo]) 7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff 7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be *** WARNING: Unable to verify checksum for xerces-c_2_7.dll *** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll - 7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff 7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7 7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be 7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090 7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b The address MyApplication+0x2a85b9 corresponds to a call to erase() of a std::list. What I have tried so far Reviewing all the code related to the point where the crash ends up happening. Trying to enable pageheap in our testing lab, though nothing useful has been found so far. We have replaced the std::list with a C array and then it crashes in another part of the code (although it is related code, it's not in the code where the old list resided). Coincidentally, now it crashes in another erase, though this time of a std::multiset.
    Let me copy the stack contained in the dump: ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes msvcrt.dll!_free() + 0xc3 bytes MyApplication.exe!006a4fda() [Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe] MyApplication.exe!0069f305() ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes msvcrt.dll!_free() + 0xc3 bytes c5ffffff() Possible solutions (that I'm aware of) which cannot be applied: "Migrate the application to a newer compiler": We are working on this, but it's not a solution at the moment. "Enable pageheap (normal or full)": We can't enable pageheap on production machines as this affects performance heavily. I think that's all I remember now; if I have forgotten something I'll add it ASAP. If you can give me a hint or propose some possible solution, don't hesitate to answer! Thank you in advance for your time and advice.

  • How do I edit an XML file with Perl?

    - by thebourneid
    I have a movie collection catalogue with local links to folders and files for easy access. Recently I reorganized my entire hard disk space and I need to update the links, and I'm trying to do that automatically with Perl. I can export the data in an XML file and import it again. I can extract the new filepaths with the use of File::Find, but I'm stuck with two problems. I have no idea how to connect the $title from the new filepath with the corresponding $title from the XML file. I'm dealing with such files for the first time and I don't know how to proceed with the replacement process. Here is what I've done so far: use strict; use warnings; use File::Basename; use File::Find; use File::Spec; use XML::Simple; use Data::Dumper; my $dir_target = 'D:/Movies/'; my %titles_locations = (); find(\&file_handler, $dir_target); sub file_handler { /\.iso$/ or return; my $fn = $File::Find::name; $fn =~ s/\//\\/g; $fn =~ /(.*\\)(.*)/; my $path = $1; my $filename = $2; my $title = (File::Spec->splitdir($fn))[2]; $title =~ s/(.*?)\s\(\d+\)$/$1/; $title =~ s/~/:/; $title =~ s/`/?/; my $link_local = '<link><description>Folder</description><url>'.$path.'</url><urltype>Movie</urltype></link><link><description>'.$filename.'</description><url>'.$fn.'</url><urltype>Movie</urltype></link>' unless $title eq ''; $titles_locations{$title} = {'filename'=>$filename, 'path'=>$path }; } my $xml_in = XMLin('somepath/test.xml', ForceArray => 1, KeepRoot => 1); my $title = {'key1' => 'title', 'key2' => 'links'}; foreach my $link (keys %$title) { } print Data::Dumper->Dump([$title]); my $xml_out = XMLout($xml_in, OutputFile => 'somepath/test_out.xml', KeepRoot=>1); And here is a snippet of the data I need to edit. If the IMDB and DVD Empire links are found, do not touch them; if local links are found, replace them; otherwise insert them. I'm willing to complete the code myself but need some direction on how to proceed further. Thanks. <title>$title</title> ....... <links> <link> <description>IMDB</description> <url>http://www.imdb.com/title/VARIABLE</url> <urltype>URL</urltype> </link> <link> <description>DVD Empire</description> <url>http://www.dvdempire.com/VARIABLE</url> <urltype>URL</urltype> </link> <link> <description>Folder</description> <url>OLD_FOLDERPATH</url> <urltype>Movie</urltype> </link> <link> <description>OLD_FILENAME</description> <url>OLD_FILENAMEPATH</url> <urltype>Movie</urltype> </link> </links>
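
    The replace-or-insert rule itself is mechanical once each <links> element is in hand: in the sample data, local links carry urltype "Movie" while the IMDB/DVD Empire entries carry "URL", so the web links are naturally left alone. A hedged cross-language sketch with LINQ to XML; newFolder and newFile stand for the values held in the %titles_locations hash built by the directory scan:

        using System.Linq;
        using System.Xml.Linq;

        static void UpdateLocalLinks(XElement links, string newFolder, string newFile)
        {
            // Remove any existing local ("Movie") links; "URL" links stay untouched.
            var movieLinks = links.Elements("link")
                .Where(l => (string)l.Element("urltype") == "Movie")
                .ToList();
            foreach (var l in movieLinks) l.Remove();

            // Insert the fresh pair, covering both the replace and the insert case.
            links.Add(
                new XElement("link",
                    new XElement("description", "Folder"),
                    new XElement("url", newFolder),
                    new XElement("urltype", "Movie")),
                new XElement("link",
                    new XElement("description", newFile),
                    new XElement("url", System.IO.Path.Combine(newFolder, newFile)),
                    new XElement("urltype", "Movie")));
        }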

  • sql performance on new server

    - by Rapunzo
    My database is running on a PC (AMD Phenom X6, Intel SSD, 8 GB DDR3 RAM, Windows 7 + SQL Server 2008 R2 SP3), and once the database passed 200 MB it started struggling: timeout problems and queries that run up to 30 seconds. I also have an old server (IBM xSeries 266: 72*3 15k RPM SCSI disks in RAID 5, 4 GB RAM, Windows Server 2003 + SQL Server 2008 R2 SP3), and on it the same query starts returning results in 100 seconds. I tried the query analyzer tool for tuning my indexes, but without much improvement. It's a big disappointment for me, because I thought that even though it's an old server it should be more powerful, with 15k RPM disks in RAID 5. What should I do? Do I need a $10,000 new server to get good performance from SQL Server? Can't I use that IBM server? Extra information: there are 50 SQL users and it's an ERP program. Here is my query: ALTER FUNCTION [dbo].[fnDispoTerbiye] ( ) RETURNS TABLE AS RETURN ( SELECT MD.dispoNo, SV.sevkNo, M1.musteriAdi AS musteri, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SUM(T.topMetre) AS toplamSevkMetre, MD.dispoMetresi, DT.gelisMetresi, ISNULL(DT.fire, 0) AS fire, SV.sevkTarihi, DT.gelisTarihi, SP.mamulTermin, SD.miktar AS siparisMiktari, M.musteriAdi AS boyahane, MD.akisNotu AS islemler, --dbo.fnAkisIslemleri(MD.dispoNo) DT.partiNo, DT.iplikBoyaId, B.tanimAd AS BoyaTuru, MAX(HD.hamEn) AS hamEn, MAX(HD.hamGramaj) AS hamGramaj, TS.mamulEn, TS.mamulGramaj, DT.atkiCekmesi, DT.cozguCekmesi, DT.fiyat, DV.dovizCins, DT.dovizId, (SELECT CASE WHEN DT.dovizId = 2 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 2 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 3 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 3 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 1 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 1 ORDER BY tarih DESC), 2) AS numeric(18, 2)) END AS Expr1) AS ToplamTLfiyat, DT.aciklama, MD.dispoNotu, SD.siparisId, SD.siparisDetayId, DT.sqlUserName, DT.kayitTarihi, O.orguAd, 'Çözgü=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 1) AS Expr1) + ')' + ' Atki=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 2) AS Expr1) + ')' AS iplikAciklama, DT.prosesOk, dbo.[fnYikamaTalimat](SP.siparisId) yikamaTalimati FROM tblDoviz AS DV WITH(NOLOCK) INNER JOIN tblDispoTerbiye AS DT WITH(NOLOCK) INNER JOIN tblTanimlar AS B WITH(NOLOCK) ON DT.iplikBoyaId = B.tanimId AND B.tanimTurId = 2 ON DV.id = DT.dovizId RIGHT OUTER JOIN tblMusteri AS M1 WITH(NOLOCK) INNER JOIN tblSiparisDetay AS SD WITH(NOLOCK) INNER JOIN tblDispo AS MD WITH(NOLOCK) ON SD.siparisDetayId = MD.siparisDetayId INNER JOIN tblTipTur AS TT WITH(NOLOCK) ON SD.tipTurId = TT.tipTurId INNER JOIN tblSiparis AS SP WITH(NOLOCK) ON SD.siparisId = SP.siparisId ON M1.musteriNo = SP.musteriNo INNER JOIN tblTip AS TP WITH(NOLOCK) ON SD.tipTurId = TP.tipTurId AND SD.tipNo = TP.tipNo AND SD.desenNo = TP.desen AND SD.varyantNo = TP.varyant INNER JOIN tblOrgu AS O WITH(NOLOCK) ON TP.orguId = O.orguId INNER JOIN tblMusteri AS M WITH(NOLOCK) INNER JOIN tblSevkiyat AS SV WITH(NOLOCK) ON M.musteriNo = SV.musteriNo INNER JOIN tblSevkDetay AS SVD WITH(NOLOCK) ON SV.sevkNo = SVD.sevkNo ON MD.mamulDispoHamSevkno = SV.sevkNo LEFT OUTER JOIN tblTop AS T WITH(NOLOCK) INNER JOIN tblDispo AS HD WITH(NOLOCK) ON T.dispoNo = HD.dispoNo AND T.dispoTuruId = HD.dispoTuruId ON SVD.dispoTuruId = T.dispoTuruId AND SVD.dispoNo = T.dispoNo AND SVD.topNo = T.topNo AND MD.siparisDetayId = HD.siparisDetayId ON DT.dispoTuruId = MD.dispoTuruId AND DT.dispoNo = MD.dispoNo LEFT OUTER JOIN tblDispoTerbiyeTest AS TS WITH(NOLOCK) ON DT.dispoTuruId = TS.dispoTuruId AND DT.dispoNo = TS.dispoNo --WHERE DT.gelisTarihi IS NULL -- OR DT.gelisTarihi > GETDATE()-30 GROUP BY MD.dispoNo, DT.partiNo, DT.iplikBoyaId, TS.mamulEn, TS.mamulGramaj, DT.gelisMetresi, DT.gelisTarihi, DT.atkiCekmesi, DT.cozguCekmesi, DT.fire, DT.fiyat, DT.aciklama, DT.sqlUserName, DT.kayitTarihi, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SD.siparisId, SD.siparisDetayId, B.tanimAd, M.musteriAdi, M.musteriAdi, M1.musteriAdi, O.orguAd, TP.iplikAciklama, SD.miktar, MD.dispoNotu, SP.mamulTermin, DT.dovizId, DV.dovizCins, MD.dispoMetresi, MD.akisNotu, SV.sevkNo, SV.sevkTarihi, DT.prosesOk,SP.siparisId )

  • Internet Explorer background-color hover problem

    - by danilo
    I have a strange problem with table formatting in IE 7. In IE, when using border-collapse, the borders don't get displayed correctly. That's why I used this fix: .table-vmlist td { border-top: 1px solid black; } td.col-vm-status, tr.row-details td { border-left: 1px solid black; } td.col-vm-rdp, tr.row-details td { border-right: 1px solid black; } .table-vmlist { border-bottom: 1px solid black;} When hovering over the row, it gets highlighted with CSS: .table-vmlist tr.row-vm { background-color: #A4C3EF; } .table-vmlist tr.row-vm:hover { background-color: #91BAEF; } Now, in IE 7, when moving the mouse from the top to the bottom of the list, every row gets highlighted correctly and no problems happen. But if I move my mouse pointer from the bottom of the list to the top, every second row seems to lose the border. Can someone explain what the problem is, and how to solve it? This is my markup: <tr class="row-vm"> <td class="col-vm-status status-1"><img title="Host Down" alt="Down" src="/Technik/vm-management/img/hoststatus_1.png"></td> <td class="col-vm-name">V1-VM-1</td> <td class="col-vm-stati"> <img title="Ping" alt="Ping status" src="/Technik/vm-management/img/servicestatus_3.png"> <img title="CPU" alt="CPU status" src="/Technik/vm-management/img/servicestatus_3.png"> <img title="RAM" alt="RAM status" src="/Technik/vm-management/img/servicestatus_3.png"> <img title="C:\ Diskspace" alt="Disk space status" src="/Technik/vm-management/img/servicestatus_3.png"> </td> <td class="col-vm-owner">kus</td> <td class="col-vm-purpose">Citrix Testserver</td> <td class="col-vm-ip">-</td> <td class="col-vm-uptime">-</td> <td class="col-vm-rdp">&nbsp;</td> </tr> And the CSS: /* Format the VM table */ .table-vmlist { border-collapse: collapse; } .table-vmlist tr { border: 1px solid black; } .table-vmlist tr.row-header { border: none; } .table-vmlist tr.row-vm { background-color: #A4C3EF; } .table-vmlist tr.row-vm:hover { background-color: #91BAEF; } .table-vmlist th { text-align: left; } .table-vmlist td { } .table-vmlist th, table td { padding: 2px 0px; } /* Set the column widths of the VM table */ .table-vmlist #col-status { width: 25px; } .table-vmlist #col-stati { width: 90px; } .table-vmlist #col-owner { width: 90px; } .table-vmlist #col-ip { width: 100px; } .table-vmlist #col-uptime { width: 70px; } .table-vmlist #col-rdp { width: 25px; } .table-vmlist tr.row-details td { padding: 0px 10px; } /* No border around linked images */ a img { border-width: 0px; } /* Show a hand cursor instead of the normal arrow for the power-on button */ td.status-1 img { cursor: pointer; } img.ajax-loader { margin-left: 2px; } IE fix: .table-vmlist td { border-top: 1px solid black; } td.col-vm-status, tr.row-details td { border-left: 1px solid black; } td.col-vm-rdp, tr.row-details td { border-right: 1px solid black; } .table-vmlist { border-bottom: 1px solid black;}

  • Using dynamic enum as type in a parameter of a method

    - by samar
    Hi Experts, What I am trying to achieve here is a bit tricky. Let me give a little background first. I am aware that we can use an enum as the type of a method parameter. For example, I can do something like this (a very basic example) namespace Test { class DefineEnums { public enum MyEnum { value1 = 0, value2 = 1 } } class UseEnums { public void UseDefinedEnums(DefineEnums.MyEnum _enum) { //Any code here. } public void Test() { // "value1" comes here with the intellisense. UseDefinedEnums(DefineEnums.MyEnum.value1); } } } What I need to do is create a dynamic Enum and use that as the type in place of DefineEnums.MyEnum mentioned above. I tried the following. 1. I used a method (found on the net) to create a dynamic enum from a list of strings, and created a static class I can use. using System; using System.Collections.Generic; using System.Reflection; using System.Reflection.Emit; namespace Test { public static class DynamicEnum { public static Enum finished; static List<string> _lst = new List<string>(); static DynamicEnum() { _lst.Add("value1"); _lst.Add("value2"); finished = CreateDynamicEnum(_lst); } public static Enum CreateDynamicEnum(List<string> _list) { // Get the current application domain for the current thread. AppDomain currentDomain = AppDomain.CurrentDomain; // Create a dynamic assembly in the current application domain, // and allow it to be executed and saved to disk. AssemblyName aName = new AssemblyName("TempAssembly"); AssemblyBuilder ab = currentDomain.DefineDynamicAssembly( aName, AssemblyBuilderAccess.RunAndSave); // Define a dynamic module in "TempAssembly" assembly. For a single- // module assembly, the module has the same name as the assembly. ModuleBuilder mb = ab.DefineDynamicModule(aName.Name, aName.Name + ".dll"); // Define a public enumeration with the name "Elevation" and an // underlying type of Integer. EnumBuilder eb = mb.DefineEnum("Elevation", TypeAttributes.Public, typeof(int)); // Define two members, "High" and "Low". //eb.DefineLiteral("Low", 0); //eb.DefineLiteral("High", 1); int i = 0; foreach (string item in _list) { eb.DefineLiteral(item, i); i++; } // Create the type and save the assembly. return (Enum)Activator.CreateInstance(eb.CreateType()); //ab.Save(aName.Name + ".dll"); } } } 2. I tried using the class, but I am unable to use the "finished" enum defined above as a type, i.e. I am not able to do the following: public static void TestDynEnum(Test.DynamicEnum.finished _finished) { // Do anything here with _finished. } I guess the post has become too long but I hope I have made it quite clear. Please help! Thanks in advance! Regards, Samar
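
    The sticking point is that a type emitted at runtime cannot appear in a method signature that was compiled beforehand; compiled code can only reach it through its base type System.Enum or through reflection. A minimal sketch of consuming the emitted type, assuming the DynamicEnum class from the post:

        using System;

        class DynamicEnumDemo
        {
            // Compiled code can only see the emitted enum as System.Enum;
            // the concrete type exists only at runtime.
            static void TestDynEnum(Enum value)
            {
                Console.WriteLine("{0} = {1}", value, Convert.ToInt32(value));
            }

            static void Main()
            {
                Type enumType = Test.DynamicEnum.finished.GetType(); // the emitted "Elevation" type
                object v1 = Enum.Parse(enumType, "value1");          // member lookup by name
                TestDynEnum((Enum)v1);                               // prints "value1 = 0"
            }
        }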

  • how to use graphviz in a C# application?

    - by ghasedak -
    I have a dot format file and I downloaded Graphviz for Windows. Now how can I use Graphviz to show a graph in my C# application? using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using VDS.RDF; using VDS.RDF.Parsing; using VDS.RDF.Query; using System.IO; using System.Windows; using System.Runtime.InteropServices; using VDS.RDF.Writing; using System.Diagnostics; namespace WindowsFormsApplication2 { public partial class first : Form { Graph g = new Graph(); string s1 = null; /**************************************DATA********************************************/ public first() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { Stream myStream = null; var parser = new Notation3Parser(); var graph = new Graph(); OpenFileDialog openFileDialog1 = new OpenFileDialog(); openFileDialog1.Filter = "RDF files (*.n3)|*.n3"; openFileDialog1.FilterIndex = 1; openFileDialog1.RestoreDirectory = true; openFileDialog1.Multiselect = false; if (openFileDialog1.ShowDialog() == DialogResult.OK) { try { if ((myStream = openFileDialog1.OpenFile()) != null) { using (myStream) { string s = openFileDialog1.FileName.ToString(); string w= Directory.GetCurrentDirectory().ToString(); string Fname = openFileDialog1.SafeFileName.ToString(); File.Copy(s,Path.Combine(w,Fname),true); // Insert code to read the stream here. Win32.AllocConsole(); s1 = Path.Combine(w, Fname); insertNodeButton.Visible = true; delBut.Visible = true; simi.Visible = true; showNodes showNodes1 = new showNodes(s1); g = showNodes1.returngraph(); Console.Read(); Win32.FreeConsole(); // g.SaveToFile("firstfile.n3"); this.Show(); } } } catch (Exception ex) { MessageBox.Show("Error: Could not read file from disk. Original error: " + ex.Message); } } GraphVizWriter hi = new GraphVizWriter(); hi.Save(g, "c:\\ahmad.dot"); } private void button2_Click(object sender, EventArgs e) { string strCmdLine = Application.StartupPath + "\\Tools\\rdfEditor.exe"; //string strCmdLine = "DIR"; MessageBox.Show(strCmdLine); System.Diagnostics.Process process1; process1 = new System.Diagnostics.Process(); //Do not receive an event when the process exits. process1.EnableRaisingEvents = false; //The "/C" Tells Windows to Run The Command then Terminate System.Diagnostics.Process.Start(strCmdLine); process1.Close(); } private void Form1_Load(object sender, EventArgs e) { } private void insertNodeButton_Click(object sender, EventArgs e) { //Graph parentvalue = this.g; //String parentvalueadress = this.s1; addTriple a1 = new addTriple(); a1.G = g; a1.BringToFront(); a1.ShowDialog(); g = a1.G; g.SaveToFile("c:\\Hi.n3"); } This is my code. I want to visually display the dot file as a graph with Graphviz.
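
    A common way to show a .dot file from C# without extra bindings is to shell out to Graphviz's dot.exe and load the rendered image into a PictureBox. A hedged sketch; the install path is an assumption, adjust to the local machine:

        using System.Diagnostics;
        using System.Drawing;

        static Image RenderDot(string dotFile, string pngFile)
        {
            // dot -Tpng in.dot -o out.png renders the graph to a PNG file.
            var psi = new ProcessStartInfo(
                @"C:\Program Files\Graphviz\bin\dot.exe",                 // assumed install path
                string.Format("-Tpng \"{0}\" -o \"{1}\"", dotFile, pngFile))
            {
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (var dot = Process.Start(psi))
            {
                dot.WaitForExit();
            }
            return Image.FromFile(pngFile);
        }

        // e.g. in the form, after GraphVizWriter has saved the file:
        // pictureBox1.Image = RenderDot("c:\\ahmad.dot", "c:\\ahmad.png");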

  • Override `drop` for a custom sequence

    - by Bruno Reis
    In short: in Clojure, is there a way to redefine a function from the standard sequence API (which is not defined on any interface like ISeq, IndexedSeq, etc) on a custom sequence type I wrote? 1. Huge data files I have big files in the following format: A long (8 bytes) containing the number n of entries n entries, each one being composed of 3 longs (ie, 24 bytes) 2. Custom sequence I want to have a sequence over these entries. Since I cannot usually hold all the data in memory at once, and I want fast sequential access on it, I wrote a class similar to the following: (deftype DataSeq [id ^long cnt ^long i cached-seq] clojure.lang.IndexedSeq (index [_] i) (count [_] (- cnt i)) (seq [this] this) (first [_] (first cached-seq)) (more [this] (if-let [s (next this)] s '())) (next [_] (if (not= (inc i) cnt) (if (next cached-seq) (DataSeq. id cnt (inc i) (next cached-seq)) (DataSeq. id cnt (inc i) (with-open [f (open-data-file id)] ; open a memory mapped byte array on the file ; seek to the exact position to begin reading ; decide on an optimal amount of data to read ; eagerly read and return that amount of data )))))) The main idea is to read ahead a bunch of entries in a list and then consume from that list. Whenever the cache is completely consumed, if there are remaining entries, they are read from the file in a new cache list. Simple as that. To create an instance of such a sequence, I use a very simple function like: (defn ^DataSeq load-data [id] (next (DataSeq. id (count-entries id) -1 []))) ; count-entries is a trivial "open file and read a long" memoized As you can see, the format of the data allowed me to implement count very simply and efficiently. 3. drop could be O(1) In the same spirit, I'd like to reimplement drop. The format of these data files allows me to reimplement drop in O(1) (instead of the standard O(n)), as follows: if dropping less than the remaining cached items, just drop the same amount from the cache and done; if dropping more than cnt, then just return the empty list. otherwise, just figure out the position in the data file, jump right into that position, and read data from there. My difficulty is that drop is not implemented in the same way as count, first, seq, etc. The latter functions call a similarly named static method in RT which, in turn, calls my implementation above, while the former, drop, does not check if the instance of the sequence it is being called on provides a custom implementation. Obviously, I could provide a function named anything but drop that does exactly what I want, but that would force other people (including my future self) to remember to use it instead of drop every single time, which sucks. So, the question is: is it possible to override the default behaviour of drop? 4. A workaround (I dislike) While writing this question, I've just figured out a possible workaround: make the reading even lazier. The custom sequence would just keep an index and postpone the reading operation, that would happen only when first was called. The problem is that I'd need some mutable state: the first call to first would cause some data to be read into a cache, all the subsequent calls would return data from this cache. There would be a similar logic on next: if there's a cache, just next it; otherwise, don't bother populating it -- it will be done when first is called again. This would avoid unnecessary disk reads. However, this is still less than optimal -- it is still O(n), and it could easily be O(1).
Anyways, I don't like this workaround, and my question is still open. Any thoughts? Thanks.
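
    As a point of comparison in C# (the language used for examples elsewhere in these results), LINQ's Skip has the same shape of problem: it is a static extension method, so it cannot be overridden either, but an instance method of the same name wins overload resolution whenever the caller's static type is known, which is enough to get the O(1) drop. A sketch, with a hypothetical ReadEntryAt standing in for the seek-and-read:

        using System.Collections;
        using System.Collections.Generic;

        class DataSeq : IEnumerable<long>
        {
            readonly string path;      // backing data file
            readonly long count, index;

            public DataSeq(string path, long count, long index)
            {
                this.path = path; this.count = count; this.index = index;
            }

            // O(1) "drop": move the logical cursor instead of walking n entries.
            // Chosen over Enumerable.Skip when the static type is DataSeq.
            public DataSeq Skip(long n)
            {
                long next = index + n;
                return new DataSeq(path, count, next > count ? count : next);
            }

            public IEnumerator<long> GetEnumerator()
            {
                for (long i = index; i < count; i++)
                    yield return ReadEntryAt(i);     // hypothetical seek-and-read helper
            }

            IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

            long ReadEntryAt(long i) { return 0; }   // stub standing in for the file read
        }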

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
The jQuery JavaScript library has been steadily getting more popular and, with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform, including now directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery at www.jquery.com. For me, jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, through seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I've been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET's core features rather than relying on the ASP.NET AJAX library.

Microsoft and jQuery – making Friends

jQuery is an open source project, but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features, including client side validation, and we can look forward to more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery's functionality via plug-ins. Microsoft's last version of the Microsoft Ajax Library – the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out, Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted to and approved for inclusion in the official jQuery plug-in repository and have been taken over by the jQuery team for further improvements and maintenance.
Even more surprising: the jQuery Templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery 1.5, which means it will become a native feature that doesn't require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed!

What the Microsoft Involvement with jQuery means to you

For Microsoft, jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft's official backing, and in fact a large chunk of developers readily did so. Official support from Microsoft does bring a few benefits to developers, however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types, and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing, which is likely the reason for Microsoft's shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren't bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality these controls provided to control developers, they lacked useful, easily accessible functionality for day to day client side application development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit, as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: many more developers were going down the jQuery path rather than using the Microsoft built libraries, and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries, they will continue to be supported, so if you're using them in your applications there's no reason to run for the exit in a panic and start rewriting everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so you can leave existing solutions untouched even as you enhance them with jQuery.
The Microsoft jQuery Plug-ins – Solid Core Features

One of the most interesting developments in Microsoft's embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism set up for jQuery developers: by submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, rewrote these components for jQuery, and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft:

jQuery Templates – a client side template rendering engine
jQuery Data Link – a client side databinder that can synchronize changes without code
jQuery Globalization – provides formatting and conversion features for dates and numbers

The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a close look at these plug-ins.

jQuery Templates
http://api.jquery.com/category/plugins/templates/

Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by building DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or replace existing content with. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimizing client side code and reducing repeated rendering logic: a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element. I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP style templating engine) and a modified version of John Resig's MicroTemplating engine which I built into my own set of libraries because it's such a commonly used feature in my client side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original Micro Template engine, the core of the templating engine generates JavaScript code, which means that templates can include JavaScript expressions. To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list and then displays them in the document.
To do this you can create an 'item' template that describes how each of the quotes is rendered, as a template inside of the document:

<script id="stockTemplate" type="text/x-jquery-tmpl">
<div id="divStockQuote" class="errordisplay" style="width: 500px;">
    <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
    <div class="label">Last Price:</div><div>${LastPrice}</div>
    <div class="label">Net Change:</div><div>
        {{if NetChange > 0}}
        <b style="color:green" >${NetChange}</b>
        {{else}}
        <b style="color:red" >${NetChange}</b>
        {{/if}}
    </div>
    <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
</div>
</script>

The 'template' is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions in the documentation. To load up this data you can use code like the following:

<script type="text/javascript">
    //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/");
    $(document).ready(function () {
        $("#btnGetQuotes").click(GetQuotes);
    });
    function GetQuotes() {
        var symbols = $("#txtSymbols").val().split(",");
        $.ajax({
            url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
            data: JSON.stringify({ symbols: symbols }), // parameter map
            type: "POST", // data has to be POSTed
            contentType: "application/json",
            timeout: 10000,
            dataType: "json",
            success: function (result) {
                var quotes = result.d;
                var jEl = $("#stockTemplate").tmpl(quotes);
                $("#quoteDisplay").empty().append(jEl);
            },
            error: function (xhr, status) {
                alert(status + "\r\n" + xhr.responseText);
            }
        });
    };
</script>

In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that holds the actual array of quotes. The template is applied with:

var jEl = $("#stockTemplate").tmpl(quotes);

which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array, the template is automatically applied to each array item. If you pass a single data item – say a single stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the template that is currently executing. This makes it possible to maintain context within the template itself and also to pass context from a parent template to a child template, which is very powerful. Templates can be evaluated by selecting the template and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls.
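As a quick illustration of the string-based form (the markup and sample data here are made up for demonstration):

// Applying a template supplied as a string rather than a <script> block.
var markup = "<li><b>${Company}</b> (${Symbol}): ${LastPrice}</li>";
var quotes = [
    { Company: "Contoso", Symbol: "CTSO", LastPrice: 42.17 },
    { Company: "Fabrikam", Symbol: "FBKM", LastPrice: 17.30 }
];
// $.tmpl() returns a jQuery matched set, just like the selector form
$.tmpl(markup, quotes).appendTo("#quoteDisplay");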
In short, there are options. The above shows off some of the basics, but there's much more functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery Templates will become a native component in jQuery core 1.5, so it's definitely worthwhile checking out the engine today and getting familiar with this interface. As much as I'm stoked about templating becoming part of the jQuery core, because it's such an integral part of many applications, there are also a couple of shortcomings in the current incarnation:

Lack of Error Handling
Currently if you embed an expression that is invalid, it's simply not rendered. There's no error rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to get error info on what is failing in a template when it's rendered.

No String Output
Templates are always rendered into a jQuery matched set and there's no way that I can see to render directly to a string. String output can be useful for debugging, as well as for opening up templating to creating non-HTML string output.

Limited JavaScript Access
Unlike John Resig's original MicroTemplating Engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can 'execute'. There's no code execution inside of script code, which means you're limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

Error handling has been discussed quite a bit, and it's likely there will be some solution to that particular issue by the time jQuery Templates ship. The others are relatively minor issues, but something to think about anyway.

jQuery Data Link
http://api.jquery.com/category/plugins/data-link/

jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object's properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox changes, and having the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that provide the conversion logic from string to some other value, typically necessary for mapping things like textbox string input to, say, a number property, and potentially applying additional formatting and calculations. In theory this sounds great; in reality, however, this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data:

person = { firstName: "rick", lastName: "strahl" };

$(document).ready(function () {
    // provide for two-way linking of inputs
    $("form").link(person);

    // bind to non-input elements explicitly
    $("#objFirst").link(person, {
        firstName: {
            name: "objFirst",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
    $("#objLast").link(person, {
        lastName: {
            name: "objLast",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
});

This code hooks up two-way linking between a couple of textboxes on the page and the person object.
The first line in the .ready() handler provides mapping of the object to form fields with the same names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind. You specify the object and then an explicit mapping, where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object's properties and getting change notification looks like this:

function updateFirstName() {
    $(person).data("firstName", person.firstName + " (code updated)");
}

This works fine in causing any linked fields to be updated. In the bindings above, both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure, you're binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn't help much towards that goal in its current incarnation. While you can bind values, the 'binder' is too limited to be really useful. If initial values can't be assigned from the mappings, you're going to end up duplicating work loading the data using some other mechanism. There's no easy way to re-bind data with a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that's fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding (see the sketch after this section's conclusion). Two-way binding can be very useful, but it has to be easy and, most importantly, natural. If it's more work to hook up a binding than writing a couple of lines to do binding/unbinding, this sort of thing helps very little in most scenarios. In talking to some of the developers, the feature set for Data Link is not complete and they are still soliciting input on features and functionality. If you have ideas on how you want this feature to be more useful, get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be refreshable easily and to work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk, rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components, this is by far the least useful and least polished implementation at this point.
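To make the write-by-hand comparison above concrete, here is roughly what the manual wiring looks like (an illustrative sketch; the element IDs are assumed to match the earlier example):

// Manual two-way wiring, for comparison with .link():
// push input changes into the object and mirror them to the display element...
$("#firstName").change(function () {
    person.firstName = $(this).val();
    $("#objFirst").text(person.firstName);
});
// ...and a small helper to push object state back into the UI after code updates.
function refreshPerson() {
    $("#firstName").val(person.firstName);
    $("#objFirst").text(person.firstName);
}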
jQuery Globalization
http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift, and part of the reason is that JavaScript natively has little support for formatting and parsing numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of it. As .NET developers we're fairly spoiled by the richness of the APIs provided in the framework, and when doing client development one really notices the lack of these features. Even if you don't need to localize your application, the globalization plug-in helps with some basic tasks for non-localized applications: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following, for example, to format numbers and dates:

var date = new Date();
var output = $.format(date, "MMM. dd, yy") + "\r\n" +
             $.format(date, "d") + "\r\n" +          // 10/25/2010
             $.format(1222.32213, "N2") + "\r\n" +
             $.format(1222.33, "c") + "\r\n";
alert(output);

This becomes even more useful if you combine it with templates, which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here's the template I used earlier for the stock quote again, with a couple of formats applied:

<script id="stockTemplate" type="text/x-jquery-tmpl">
<div id="divStockQuote" class="errordisplay" style="width: 500px;">
    <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
    <div class="label">Last Price:</div>
    <div>${$.format(LastPrice,"N2")}</div>
    <div class="label">Net Change:</div><div>
        {{if NetChange > 0}}
        <b style="color:green" >${NetChange}</b>
        {{else}}
        <b style="color:red" >${NetChange}</b>
        {{/if}}
    </div>
    <div class="label">Last Update:</div>
    <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div>
</div>
</script>

There are also parsing methods that can easily parse dates and numbers from strings:

alert($.parseDate("25.10.2010"));
alert($.parseInt("12.222")); // de-DE uses . for thousands separators

As you can see, culture specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales:

Get a list of all available cultures
Query cultures for culture items (like currency symbol, separators, etc.)
Localized string names for all calendar related items (days of week, months)
Generated off of .NET's supported locales

In short, you get much of the same functionality that you might already be using in .NET on the server side. The plug-in ships with a huge number of locales: a Globalization.all.min.js file that contains the text defaults for all of them, plus small locale specific script files that define each locale's settings. It's highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references only to those languages you explicitly care about. Overall this plug-in is a welcome helper.
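As a quick, hedged sketch of culture selection (the preferCulture API and the culture script file names are as I recall them from the plug-in's docs of the time; treat the details as an assumption):

// After referencing a culture script, e.g. jQuery.glob.de-DE.js,
// select it as the preferred culture before formatting/parsing:
$.preferCulture("de-DE");
alert($.format(1222.33, "c"));    // currency with German separators and symbol
alert($.parseDate("25.10.2010")); // parsed with the de-DE date pattern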
Even if you use it with a single locale (like en-US) and do no other localization, you'll gain solid support for number and date formatting, which is a vital feature of many applications.

Changes for Microsoft

It's good to see Microsoft coming out of its shell and away from the 'not-built-here' mentality that has been so pervasive in the past. It's especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft's own internal efforts in terms of design, usage model and… popularity. It's great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course on policy and moving into a more open and sharing environment in the process. The additional jQuery support introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It's also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery, ASP.NET

    Read the article

  • Why is Java EE 6 better than Spring?

    - by arungupta
Java EE 6 was released over 2 years ago and there are now 14 compliant application servers. In all my talks around the world, a question that is frequently asked is: Why should I use Java EE 6 instead of Spring? There are already several blogs covering that topic:

Java EE wins over Spring by Bill Burke
Why will I use Java EE instead of Spring in new Enterprise Java projects in 2012? by Kai Waehner (more discussion on TSS)
Spring to Java EE migration (Part 1 and 2, 3 and 4 coming as well) by David Heffelfinger
Spring to Java EE - A Migration Experience by Lincoln Baxter
Migrating Spring to Java EE 6 by Bert Ertman and Paul Bakker at NLJUG
Moving from Spring to Java EE 6 - The Age of Frameworks is Over at TSS
Java EE vs Spring Shootout by Rohit Kelapure and Reza Rehman at JavaOne 2011
Java EE 6 and the Ewoks by Murat Yener
Definite excuse to avoid Spring forever - Bert Ertman and Arun Gupta

I will try to share my perspective in this blog. First of all, I'd like to start with a note: Thank you, Spring framework, for filling the interim gap and providing functionality that is now included in the mainstream Java EE 6 application servers. The Java EE platform has evolved over the years, learning from frameworks like Spring, and provides all the functionality needed to build an enterprise application. Thank you very much, Spring framework! While Spring was revolutionary in its time and is still very popular and quite mainstream, in the same way Struts was circa 2003, it really is last generation's framework - some people are even calling it legacy. However, my theory is "code is king". So my approach is to build/take a simple Hello World CRUD application in Java EE 6 and Spring and compare the deployable artifacts. I started looking at the official tutorial Developing a Spring Framework MVC Application Step-by-Step, but it is using the older version 2.5, and I wasn't able to find any updated version for the current 3.1 release. Next, I downloaded Spring Tool Suite and thought it would provide some template samples to get started. At least a quick search did not show any handy tutorials - either video or text-based. So I searched and found a link to their SVN repository at src.springframework.org/svn/spring-samples/. I tried the "mvc-basic" sample, and the generated WAR file was 4.43 MB. While it was named a "basic" sample, it came with 19 different libraries bundled - but it was what I could find:

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/joda-time-jsptags-1.0.2.jar
./WEB-INF/lib/jstl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

And it is not even using any database! The app deployed fine on GlassFish 3.1.2, but the "@Controller Example" link did not work as it was missing the context root. With a bit of tweaking I could deploy the application, and I assume the account got created because no error was displayed in the browser or server log.
Next I generated the WAR for "mvc-ajax"; the 5.1 MB WAR had 20 JARs (1 removed, 2 added):

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jackson-core-asl-1.6.4.jar
./WEB-INF/lib/jackson-mapper-asl-1.6.4.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/jstl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

2 more JARs for just doing Ajax. Anyway, deploying this application gave the following error:

Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.map.SerializationConfig.<init>(Lorg/codehaus/jackson/map/ClassIntrospector;Lorg/codehaus/jackson/map/AnnotationIntrospector;Lorg/codehaus/jackson/map/introspect/VisibilityChecker;Lorg/codehaus/jackson/map/jsontype/SubtypeResolver;)V
    at org.springframework.samples.mvc.ajax.json.ConversionServiceAwareObjectMapper.<init>(ConversionServiceAwareObjectMapper.java:20)
    at org.springframework.samples.mvc.ajax.json.JacksonConversionServiceConfigurer.postProcessAfterInitialization(JacksonConversionServiceConfigurer.java:40)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:407)

Seems like some incorrect repos in the "pom.xml". Next one is "mvc-showcase"; the 6.49 MB WAR now has 28 JARs, as shown below:

./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/aspectjrt-1.6.10.jar
./WEB-INF/lib/commons-fileupload-1.2.2.jar
./WEB-INF/lib/commons-io-2.0.1.jar
./WEB-INF/lib/el-api-2.2.jar
./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar
./WEB-INF/lib/jackson-core-asl-1.8.1.jar
./WEB-INF/lib/jackson-mapper-asl-1.8.1.jar
./WEB-INF/lib/javax.inject-1.jar
./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar
./WEB-INF/lib/jdom-1.0.jar
./WEB-INF/lib/joda-time-1.6.2.jar
./WEB-INF/lib/jstl-api-1.2.jar
./WEB-INF/lib/jstl-impl-1.2.jar
./WEB-INF/lib/log4j-1.2.16.jar
./WEB-INF/lib/rome-1.0.0.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-asm-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-beans-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-context-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-core-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-expression-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-web-3.1.0.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.1.0.RELEASE.jar
./WEB-INF/lib/validation-api-1.0.0.GA.jar

The app at least deployed and showed results this time. But still no database!
Next I tried building "jpetstore" and got the error:

[ERROR] Failed to execute goal on project org.springframework.samples.jpetstore: Could not resolve dependencies for project org.springframework.samples:org.springframework.samples.jpetstore:war:1.0.0-SNAPSHOT: Failed to collect dependencies for [commons-fileupload:commons-fileupload:jar:1.2.1 (compile), org.apache.struts:com.springsource.org.apache.struts:jar:1.2.9 (compile), javax.xml.rpc:com.springsource.javax.xml.rpc:jar:1.1.0 (compile), org.apache.commons:com.springsource.org.apache.commons.dbcp:jar:1.2.2.osgi (compile), commons-io:commons-io:jar:1.3.2 (compile), hsqldb:hsqldb:jar:1.8.0.7 (compile), org.apache.tiles:tiles-core:jar:2.2.0 (compile), org.apache.tiles:tiles-jsp:jar:2.2.0 (compile), org.tuckey:urlrewritefilter:jar:3.1.0 (compile), org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-orm:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-context-support:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework.webflow:spring-js:jar:2.0.7.RELEASE (compile), org.apache.ibatis:com.springsource.com.ibatis:jar:2.3.4.726 (runtime), com.caucho:com.springsource.com.caucho:jar:3.2.1 (compile), org.apache.axis:com.springsource.org.apache.axis:jar:1.4.0 (compile), javax.wsdl:com.springsource.javax.wsdl:jar:1.6.1 (compile), javax.servlet:jstl:jar:1.2 (runtime), org.aspectj:aspectjweaver:jar:1.6.5 (compile), javax.servlet:servlet-api:jar:2.5 (provided), javax.servlet.jsp:jsp-api:jar:2.1 (provided), junit:junit:jar:4.6 (test)]: Failed to read artifact descriptor for org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT: Could not transfer artifact org.springframework:spring-webmvc:pom:3.0.0.BUILD-SNAPSHOT from/to JBoss repository (http://repository.jboss.com/maven2): Access denied to: http://repository.jboss.com/maven2/org/springframework/spring-webmvc/3.0.0.BUILD-SNAPSHOT/spring-webmvc-3.0.0.BUILD-SNAPSHOT.pom

It appears the sample is broken - maybe I was pulling from the wrong repository - it would be great if someone were to point me at a good target to use here. With a 50% hit rate on the samples in this repository, I started searching through numerous blogs, most of which either have outdated information (using XML-heavy Spring 2.5), are missing some piece of configuration (a typical "feature" of Spring), or have too much complexity in the sample. I finally found this blog, which worked like a charm. It creates a trivial Spring MVC 3 application using Hibernate and MySQL, performing CRUD operations on a single table in a database using typical Spring technologies. I downloaded the sample code from the blog, deployed it on GlassFish 3.1.2, and could CRUD the "person" entity. The source code for this application can be downloaded here. More details on the application statistics are below. I then built a similar CRUD application in Java EE 6 using NetBeans wizards in a couple of minutes. The source code for the application can be downloaded here and the WAR here. The Spring Source Tool Suite may also offer similar wizard-driven capabilities, but this blog focuses primarily on comparing the runtimes. The lack of STS tutorials was slightly disappointing as well. NetBeans, however, has tons of text-based and video tutorials and tons of material from the community. One more bit on the download size of the tools bundles ...
NetBeans 7.1.1 "All" is 211 MB (which includes GlassFish and Tomcat)
Spring Tool Suite 2.9.0 is 347 MB (~65% bigger)

This blog is not about the tooling comparison, so back to the Java EE 6 version of the application ... In order to run the Java EE version on GlassFish, copy the MySQL Connector/J to the glassfish3/glassfish/domains/domain1/lib/ext directory and create a JDBC connection pool and JDBC resource as:

./bin/asadmin create-jdbc-connection-pool \
    --datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlDataSource \
    --restype javax.sql.DataSource \
    --property portNumber=3306:user=mysql:password=mysql:databaseName=mydatabase \
    myConnectionPool

./bin/asadmin create-jdbc-resource --connectionpoolid myConnectionPool jdbc/myDataSource

I generated WARs for the two projects, and the comparison below highlights some differences between them (Java EE 6 vs Spring):

WAR file size: 0.021030 MB vs 10.87 MB (~516x)
Number of files: 20 vs 53 (> 2.5x)
Bundled libraries: 0 vs 36
Total size of libraries: 0 vs 12.1 MB
XML files: 3 vs 5
LoC in XML files: 50 (11 + 15 + 24) vs 129 (27 + 46 + 16 + 11 + 19) (~2.5x)
Total .properties files: 1 (Bundle.properties) vs 2 (spring.properties, log4j.properties)
Cold deploy: 5,339 ms vs 11,724 ms
Second deploy: 481 ms vs 6,261 ms
Third deploy: 528 ms vs 5,484 ms
Fourth deploy: 484 ms vs 5,576 ms
Runtime memory: ~73 MB vs ~101 MB

Some points worth highlighting from the table ...

516x WAR file, 10x deployment time - With 12.1 MB of libraries (for a very basic application) bundled in your application, the WAR file size and the deployment time will naturally go up. The WAR file for the Spring-based application is 516x bigger, and the deployment time is double during the first deployment and ~10x during subsequent deployments. The Java EE 6 application is fully portable and will run on any Java EE 6 compliant application server.

36 libraries in the WAR - There are 14 Java EE 6 compliant application servers today. Each of those servers provides all the functionality like transactions, dependency injection, security, persistence, etc. typically required of an enterprise or web application. There is no need to bundle 36 libraries worth 12.1 MB for a trivial CRUD application; the compliant application servers provide all that functionality baked in. Now, you could instead deploy these libraries in the container, but then you don't get the "portability" offered by Spring. Does your typical Spring deployment actually do that?

3x LoC in XML - The number of XML files is about 1.6x and the LoC is ~2.5x. So much XML seems circa 2003, when the Java language had no annotations. The XML files could be further reduced, e.g. faces-config.xml can be dropped by not providing i18n, but I just want to compare stock applications.

Memory usage - Both applications were deployed on a default GlassFish 3.1.2 installation, and any additional memory consumed as part of deployment/access was attributed to the application. This is by no means scientific but at least provides an initial ballpark. This area definitely needs more investigation.

Another comparison, between typical Java EE 6 compliant application servers and the custom stack created for a Spring application (built in for Java EE 6 vs the Spring stack):

Web Container: built in vs 53 MB (tcServer 2.6.3 Developer Edition)
Security: built in vs 12 MB (Spring Security 3.1.0)
Persistence: built in vs 6.3 MB (Hibernate 4.1.0, required)
Dependency Injection: built in vs 5.3 MB (Framework)
Web Services: built in vs 796 KB (Spring WS 2.0.4)
Messaging: built in vs 3.4 MB (RabbitMQ Server 2.7.1) + 936 KB (Java client)
OSGi: built in vs 1.3 MB (Spring OSGi 1.2.1)
Total: GlassFish and WebLogic (starting at 33 MB) vs 83.3 MB
There are differentiating factors in both stacks. But most of the functionality, like security, persistence, and dependency injection, is baked into a Java EE 6 compliant application server, whereas it needs to be individually managed and patched for a Spring application. This very quickly leads to a "stack explosion". The Java EE 6 servers are tested extensively on a variety of platforms in different combinations, whereas a Spring application developer is responsible for testing with different JDKs, operating systems, versions, patches, etc. Oracle has both the leading OSS lightweight server with GlassFish and the leading enterprise Java server with WebLogic Server, both Java EE 6 and both with lightweight deployment options. The Web Container offered as part of a Java EE 6 application server not only deploys your enterprise Java applications but also provides the operational management, diagnostics, and mission-critical capabilities required by your applications. The Java EE 6 platform also introduced the Web Profile, which is a subset of the specifications from the entire platform. It is targeted at developers of modern web applications, offering a reasonably complete stack composed of standard APIs and capable out-of-the-box of addressing the needs of a large class of Web applications. As your applications grow, the stack can grow to the full Java EE 6 platform. The GlassFish Server Web Profile, starting at 33 MB (smaller than just the non-standard tcServer), provides most of the functionality typically required by a web application. WebLogic provides battle-tested functionality for high throughput, low latency, enterprise grade web applications. No individual managing or patching; all tested and commercially supported for you! Note that VMWare does have a server, tcServer, but it is non-standard and not even certified to the level of the standard Web Profile most customers expect these days. Customers who choose it risk proprietary lock-in, since VMWare does not seem to want to formally certify with either the Java EE 6 Enterprise Platform or the Java EE 6 Web Profile - but of course it would be great if they were to join the community and help their customers reduce the risk of deploying on VMWare software. Some more points to help you choose between Java EE 6 and Spring ...

Freedom to choose a container - There are 14 Java EE 6 compliant application servers today, with a variety of open source and commercial offerings. A Java EE 6 application can be deployed on any of those containers. So if you deployed your application on GlassFish today and would like to scale up with your demands, you can deploy the same application to WebLogic. And because of the portability of a Java EE 6 application, you can even take it to a different vendor altogether. Spring requires a runtime, which could be any of these app servers as well. But why use Spring when all the required functionality is already baked into the application server itself? Spring also has a different definition of portability, where they claim to bundle all the libraries in the WAR file and move to any application server. But we saw earlier how bloated that archive can be. The equivalent features in Spring runtime offerings (mainly tcServer) are not all open source, not as mature, and often require manual assembly.
Vendor choice - The Java EE 6 platform is created using the Java Community Process, where all the big players like Oracle, IBM, RedHat, and Apache are contributing to make the platform successful. Each application server provides the basic Java EE 6 platform compliance and has its own competitive offerings. This allows you to choose an application server for deploying your Java EE 6 applications. If you are not happy with the support or features of one vendor, you can move your application to a different vendor because of the portability promise offered by the platform. Spring is a set of products from a single company: one price book, one support organization, one sustaining organization, one sales organization, etc. If any of those cause a customer headache, where do you go? Java EE, backed by multiple vendors, is a safer bet for those that are risk averse.

Production support - With Spring, you typically need to get support from two vendors - VMWare and the container provider. With Java EE 6, all of this is typically provided by one vendor. For example, Oracle offers commercial support spanning systems, operating systems, JDK, application server, and the applications on top of them. VMWare certainly offers complete production support, but do you really want to put all your eggs in one basket? Do you really use tcServer? ;-)

Maintainability - With Spring, you are likely building your own distribution with multiple JAR files: integrating, patching, and versioning all those components. Spring's claim is that multiple JAR files allow you to go à la carte and pick the latest versions of different components. But who is responsible for testing whether all these versions work together? Yep, you got it: it's YOU! If something does not work, who patches and maintains the JARs? Of course, you! Commercial support for such a configuration? You're on your own! The Java EE application servers manage all of this for you and provide a well-tested and commercially supported bundle.

While it is always good to realize that there is something new and improved that updates and replaces older frameworks like Spring, the good news is that not only does a Java EE 6 container offer what is described here, most will also let you deploy and run your Spring applications on them while you go through an upgrade to a more modern architecture. End result: you get the best of both worlds - keeping your legacy investment while moving to the more agile, lightweight world of Java EE 6.

A message to the Spring lovers ... The complexity in J2EE 1.2, 1.3, and 1.4 led to the genesis of Spring, but that was in 2004. This is 2012 and the name has changed to "Java EE 6" :-) There are tons of improvements in the Java EE platform to make it easy to use and powerful. Some examples (a minimal sketch follows this list):

Adding @Stateless on a POJO makes it an EJB
EJBs can be packaged in a WAR with no special packaging or deployment descriptors
"web.xml" and "faces-config.xml" are optional in most of the common cases
Typesafe dependency injection is now part of the Java EE platform
Adding @Path on a POJO allows you to publish it as a RESTful resource
EJBs can be used as backing beans for Facelets-driven JSF pages, providing full MVC
Java EE 6 WARs are known to be kilobytes in size and deployed in milliseconds
Tons of other simplifications in the platform and application servers

So if you moved away from J2EE to Spring many years ago and have not looked at Java EE 6 (which has been out since Dec 2009), then you should definitely try it out.
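To make the @Stateless and @Path bullets concrete, here is a minimal sketch (class, method, and path names are illustrative, not from the article):

// A POJO that is both an EJB (@Stateless) and a RESTful resource (@Path),
// packaged in a plain WAR with no deployment descriptors.
import javax.ejb.Stateless;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Stateless
@Path("greetings")
public class GreetingResource {

    @GET
    @Produces("text/plain")
    public String greet() {
        // Transactions, pooling, and HTTP routing come from the
        // Java EE 6 container, not from bundled libraries.
        return "Hello from Java EE 6";
    }
}

// An empty Application subclass activates JAX-RS at /rest/* -- no web.xml needed.
@ApplicationPath("rest")
class RestConfig extends javax.ws.rs.core.Application { }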
Just be at least aware of what other alternatives are available instead of restricting yourself to one stack. Here are some workshops and screencasts worth trying:

screencast #37 shows how to build an end-to-end application using NetBeans
screencast #36 builds the same application using Eclipse
javaee-lab-feb2012.pdf is a 3-4 hour self-paced hands-on workshop that guides you through building a comprehensive Java EE 6 application using NetBeans

Each city generally has a "spring cleanup" program every year. It allows you to clean up the mess from your house. For your software projects, you don't need to wait for an annual event; just get started and reduce the technical debt now! Move away from your legacy Spring-based applications to a lighter and more modern approach of building enterprise Java applications using Java EE 6. Watch this beautiful presentation that explains how to migrate from Spring -> Java EE 6:

List of files in the Java EE 6 project:

./index.xhtml
./META-INF
./person
./person/Create.xhtml
./person/Edit.xhtml
./person/List.xhtml
./person/View.xhtml
./resources
./resources/css
./resources/css/jsfcrud.css
./template.xhtml
./WEB-INF
./WEB-INF/classes
./WEB-INF/classes/Bundle.properties
./WEB-INF/classes/META-INF
./WEB-INF/classes/META-INF/persistence.xml
./WEB-INF/classes/org
./WEB-INF/classes/org/javaee
./WEB-INF/classes/org/javaee/javaeemysql
./WEB-INF/classes/org/javaee/javaeemysql/AbstractFacade.class
./WEB-INF/classes/org/javaee/javaeemysql/Person.class
./WEB-INF/classes/org/javaee/javaeemysql/Person_.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController$1.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController$PersonControllerConverter.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonFacade.class
./WEB-INF/classes/org/javaee/javaeemysql/util
./WEB-INF/classes/org/javaee/javaeemysql/util/JsfUtil.class
./WEB-INF/classes/org/javaee/javaeemysql/util/PaginationHelper.class
./WEB-INF/faces-config.xml
./WEB-INF/web.xml

List of files in the Spring 3.x project:

./META-INF
./META-INF/MANIFEST.MF
./WEB-INF
./WEB-INF/applicationContext.xml
./WEB-INF/classes
./WEB-INF/classes/log4j.properties
./WEB-INF/classes/org
./WEB-INF/classes/org/krams
./WEB-INF/classes/org/krams/tutorial
./WEB-INF/classes/org/krams/tutorial/controller
./WEB-INF/classes/org/krams/tutorial/controller/MainController.class
./WEB-INF/classes/org/krams/tutorial/domain
./WEB-INF/classes/org/krams/tutorial/domain/Person.class
./WEB-INF/classes/org/krams/tutorial/service
./WEB-INF/classes/org/krams/tutorial/service/PersonService.class
./WEB-INF/hibernate-context.xml
./WEB-INF/hibernate.cfg.xml
./WEB-INF/jsp
./WEB-INF/jsp/addedpage.jsp
./WEB-INF/jsp/addpage.jsp
./WEB-INF/jsp/deletedpage.jsp
./WEB-INF/jsp/editedpage.jsp
./WEB-INF/jsp/editpage.jsp
./WEB-INF/jsp/personspage.jsp
./WEB-INF/lib
./WEB-INF/lib/antlr-2.7.6.jar
./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/c3p0-0.9.1.2.jar
./WEB-INF/lib/cglib-nodep-2.2.jar
./WEB-INF/lib/commons-beanutils-1.8.3.jar
./WEB-INF/lib/commons-collections-3.2.1.jar
./WEB-INF/lib/commons-digester-2.1.jar
./WEB-INF/lib/commons-logging-1.1.1.jar
./WEB-INF/lib/dom4j-1.6.1.jar
./WEB-INF/lib/ejb3-persistence-1.0.2.GA.jar
./WEB-INF/lib/hibernate-annotations-3.4.0.GA.jar
./WEB-INF/lib/hibernate-commons-annotations-3.1.0.GA.jar
./WEB-INF/lib/hibernate-core-3.3.2.GA.jar
./WEB-INF/lib/javassist-3.7.ga.jar
./WEB-INF/lib/jstl-1.1.2.jar
./WEB-INF/lib/jta-1.1.jar
./WEB-INF/lib/junit-4.8.1.jar
./WEB-INF/lib/log4j-1.2.14.jar
./WEB-INF/lib/mysql-connector-java-5.1.14.jar
./WEB-INF/lib/persistence-api-1.0.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-jdbc-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-orm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-tx-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/standard-1.1.2.jar
./WEB-INF/lib/xml-apis-1.0.b2.jar
./WEB-INF/spring-servlet.xml
./WEB-INF/spring.properties
./WEB-INF/web.xml

So, are you excited about Java EE 6? Want to get started now? Here are some resources:

Java EE 6 SDK (including runtime, samples, tutorials, etc.)
GlassFish Server Open Source Edition 3.1.2 (Community)
Oracle GlassFish Server 3.1.2 (Commercial)
Java EE 6 using WebLogic 12c and NetBeans (Video)
Java EE 6 with NetBeans and GlassFish (Video)
Java EE with Eclipse and GlassFish (Video)

    Read the article

  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
The ASP.NET MVC 4 Internet template adds some new, very useful features built on top of SimpleMembership. These changes add some great features, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of the top things you need to know, then dig into a lot more detail.

Summary:

SimpleMembership has been designed as a replacement for the previous ASP.NET Role and Membership provider system
SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs
SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership
The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders
You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController
The existing ASP.NET Role and Membership provider system remains supported, as it is part of the ASP.NET core
ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership
The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership

The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools.

SimpleMembership: The future of membership for ASP.NET

The ASP.NET Membership system was introduced with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you to override some specifics like backing storage) and the ability to store additional profile information (although the additional profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work). The ASP.NET Web Pages and WebMatrix efforts gave the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages:

With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really be focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built in security in ASP.NET. So with this release we took that time to create a new built in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created?
Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles.

Part of simplifying membership involved fixing some common problems with ASP.NET Membership.

Problems with ASP.NET Membership

ASP.NET Membership was very obviously designed around a set of assumptions:

Users and user information would most likely be stored in a full SQL Server database or in Active Directory
User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider

Some problems fall out of these assumptions.

Requires Full SQL Server for default cases

The default and most fully featured ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, rely on SQL Server cache dependencies, and depend on agents for cleanup and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc.

Note: Cory Fowler recently let me know about these updated ASP.NET scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem. Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those...

Custom Membership Providers have to work with a SQL-Server-centric API

If you want to work with another database or other membership storage system, you need to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun, since there are a lot of methods that just don't apply.

Designed around a specific view of users, roles and profiles

The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns:

In OAuth and OpenID, the user doesn't have a password
Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles
For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob

What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted, and the membership system worked with your model - not the other way around.

Requires specific schema, overflow in blob columns

I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema.

SimpleMembership as a better membership system

As you might have guessed, SimpleMembership was designed to address the above problems.
SimpleMembership as a better membership system
As you might have guessed, SimpleMembership was designed to address the above problems.
Works with your Schema
As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema: All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an “ID” column and a “username” column. The important part here is that they can be named whatever you want. For instance, the username doesn't have to be an alias; it could be an email column - you just have to tell SimpleMembership to treat that as the “username” used to log in.
Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, then a bunch of other columns he wanted in his app. Then we point SimpleMembership at that table with a one-liner:
WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true);
No other tables are needed, the table can be named anything we want, and it can have pretty much any schema we want as long as we've got an ID and something that we can map to a username.
Broaden database support to the whole SQL Server family
While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there are places where SQL Server specific SQL statements are being executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point.
Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run on Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts?
Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system.
Easy to use with Entity Framework Code First
The problem with ASP.NET Membership's system for storing additional account information is that it's the gatekeeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical example based on the AccountModel.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class.
[Table("UserProfile")]
public class UserProfile
{
    [Key]
    [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
    public int UserId { get; set; }
    public string UserName { get; set; }
    public DateTime Birthday { get; set; }
}
Now if I want to access that information, I can just grab the account by username and read the value.
var context = new UsersContext();
var username = User.Identity.Name;
var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username);
var birthday = user.Birthday;
So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define rather than a bunch of entries in membership tables that were out of your control.
How SimpleMembership integrates with ASP.NET Membership
Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships): So SimpleMembershipProvider is an implementation of ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider: The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates.
Membership in the ASP.NET MVC 4 project templates
ASP.NET MVC 4 offers six project templates:
- Empty - Really empty: just the assemblies, folder structure, and a tiny bit of basic configuration.
- Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling).
- Internet - This has both a Home and an Account controller and associated views. The Account controller supports registration and login via either local accounts or OAuth / OpenID providers.
- Intranet - Like the Internet template, but preconfigured for Windows Authentication.
- Mobile - Preconfigured with jQuery Mobile and intended for mobile-only sites.
- Web API - Preconfigured for a service backend built on ASP.NET Web API.
Out of these templates, only one (the Internet template) uses SimpleMembership.
ASP.NET MVC 4 Basic template
The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers.
You can see that configuration in the ASP.NET MVC 4 Basic template's web.config:
<profile defaultProvider="DefaultProfileProvider">
  <providers>
    <add name="DefaultProfileProvider" type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
  </providers>
</profile>
<membership defaultProvider="DefaultMembershipProvider">
  <providers>
    <add name="DefaultMembershipProvider" type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
  </providers>
</membership>
<roleManager defaultProvider="DefaultRoleProvider">
  <providers>
    <add name="DefaultRoleProvider" type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" applicationName="/" />
  </providers>
</roleManager>
<sessionState mode="InProc" customProvider="DefaultSessionProvider">
  <providers>
    <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" />
  </providers>
</sessionState>
This means that it's business as usual for the Basic template as far as ASP.NET Membership works.
ASP.NET MVC 4 Internet template
The Internet template has a few things set up to bootstrap SimpleMembership:
- \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such
- \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection, which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime)
- \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity
WebSecurity provides account management services for ASP.NET MVC (and Web Pages). WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider) but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider.
Practical example:
1. Create a new ASP.NET MVC 4 application using the Internet application template
2. Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package
3. Run the application, click on Register, add a username and password, and click submit
You'll get the following exception in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider".
This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above. When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks whether it can be cast to an ExtendedMembershipProvider before doing anything else.
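Conceptually, the guard WebSecurity applies looks something like the following sketch. This is an illustration of the behavior described above, not the actual WebMatrix.WebData source:
// Simplified sketch: WebSecurity's account calls only proceed when the
// configured default provider exposes the extended (account / OAuth) API.
var extended = Membership.Provider as ExtendedMembershipProvider;
if (extended == null)
{
    throw new InvalidOperationException(
        "To call this method, the \"Membership.Provider\" property must be an " +
        "instance of \"ExtendedMembershipProvider\".");
}
Since the Universal Providers' DefaultMembershipProvider derives from MembershipProvider but not from ExtendedMembershipProvider, the cast yields null and you get the exception shown in the practical example above.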
So, what do you do? Options:
- If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward.
- If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:
  - Replace the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with ones from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove the other things that were built around SimpleMembership - the OAuth partial view, the related NuGet packages (e.g. the DotNetOpenAuth packages), etc.
  - Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModel classes.
  - Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes.
None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET application but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably, the team) know if that's an incorrect assumption.
Membership in the ASP.NET 4.5 project template
ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses the Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a Membership Adapter to save OAuth data into an EF managed database while still running on top of ASP.NET Membership. Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up.
How does this fit in with Universal Providers (System.Web.Providers)?
Just to summarize:
- Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with a database backend in the SQL Server family other than full SQL Server. They don't require agents to handle expired session cleanup and other background tasks; they piggyback these tasks on other calls.
- Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family.
- Universal Providers do not work with SimpleMembership.
- The Universal Providers packages include some web.config transforms which you would normally want when you're using them.
What about the Web Site Administration Tool?
Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with SimpleMembership. There are a few options here:
- Use the WebSecurity and OAuthWebSecurity APIs to manage users and roles
- Create a web admin using the above APIs
- Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even direct database edits (in development, of course)
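To make the flow concrete end to end, here's a minimal sketch of SimpleMembership in use - registering a user and then updating profile data directly through EF. It assumes the Internet template's UserProfile and UsersContext classes and a connection string named "DefaultConnection"; the user name, password, and birthday values are placeholders:
// Point SimpleMembership at your own user table (the Internet template
// does this in InitializeSimpleMembershipAttribute).
WebSecurity.InitializeDatabaseConnection(
    "DefaultConnection", "UserProfile", "UserId", "UserName",
    autoCreateTables: true);

// Create a local account; the extra values flow into your own table's columns.
WebSecurity.CreateUserAndAccount(
    "jon", "p@ssw0rd", new { Birthday = new DateTime(1980, 1, 1) });

// Profile data is just data - update it through EF like any other row.
using (var context = new UsersContext())
{
    var user = context.UserProfiles.Single(u => u.UserName == "jon");
    user.Birthday = new DateTime(1981, 2, 2);
    context.SaveChanges();
}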

    Read the article

  • How To Run XP Mode in VirtualBox on Windows 7 (sort of)

    - by Matthew Guay
    A few weeks ago we showed you how to run XP Mode on a Windows 7 computer without Hardware Virtualization using VMware. Some of you have been asking if it can be done in VirtualBox as well. The answer is “Yes!” and here we’ll show you how. Editor Update: Apparently there isn’t a way to activate XP Mode through VirtualBox using this method. You will, however, be able to run it for 30 days. We have a new updated article on how to Install XP Mode with VirtualBox Using the VMLite Plugin.
Earlier we showed you how to run XP Mode on Windows 7 machines without hardware virtualization capability. Since then, a lot of you have been asking us to write up a tutorial about doing the same thing using VirtualBox. This makes it another great way to run XP Mode if your computer does not have hardware virtualization. Here we’ll see how to import the XP Mode from Windows 7 Professional, Enterprise, or Ultimate into VirtualBox so you can run XP in it for free. Note: You need to have Windows 7 Professional or above to use XP Mode in this manner. In our tests we were able to get it to run on Home Premium as well, but you’ll be breaking Windows 7 licensing agreements.
Getting Started
First, download and install XP Mode (link below). There is no need to download Virtual PC if your computer cannot run it, so just download XP Mode from the link on the left. Install XP Mode; just follow the default prompts as usual. Now, download and install VirtualBox 3.1.2 or higher (link below). Install as normal, and simply follow the default prompts. VirtualBox may notify you that your network connection will be reset during the installation. Press Yes to continue. During the install, you may see several popups asking if you wish to install device drivers for USB and network interfaces. Simply click Install, as these are needed for VirtualBox to run correctly. Setup only takes a couple of minutes and doesn’t require a reboot.
Setup XP Mode in VirtualBox
First we need to copy the default XP Mode so VirtualBox will not affect the original copy. Browse to C:\Program Files\Windows XP Mode, and copy the file “Windows XP Mode base.vhd”. Paste it in another folder of your choice, such as your Documents folder. Once you’ve copied the file, right-click on it and click Properties. Uncheck the “Read-only” box in this dialog, and then click Ok. Now, in VirtualBox, click New to create a new virtual machine. Enter the name of your virtual machine, and make sure the operating system selected is Windows XP. Choose how much memory you want to allow the virtual machine to use. VirtualBox's default is 192 MB of RAM, but for better performance you can select 256 or 512 MB. Now, select the hard drive for the virtual machine. Select “Use existing hard disk”, then click the folder button to choose the XP Mode virtual drive. In this window, click Add, and then browse to find the copy of XP Mode you previously made. Make sure the correct virtual drive is selected, then press Select. After selecting the VHD your screen should look like the following; then click Next. Verify the settings you made are correct. If not, you can go back and make any changes. When everything looks correct, click Finish.
Setup XP Mode
Now, in VirtualBox, click Start to run XP Mode. The Windows XP in this virtual drive is not fully set up yet, so you will have to go through the setup process. If you didn’t uncheck the “Read-only” box in the VHD properties before, you may see the following error.
If you see it, go back and check the file to make sure it is not read-only. When you click in the virtual machine, it will capture your mouse by default. Simply press the right Ctrl key to release your mouse so you can go back to using Windows 7. This will only be the case during the setup process; after the Guest Additions are installed, the mouse will seamlessly move between operating systems. Now, accept the license agreement in XP. Choose your correct locale and keyboard settings. Enter a name for your virtual XP, and an administrative password. Check the date, time, and time zone settings, and adjust them if they are incorrect. The time and date are usually correct, but the time zone often has to be corrected. XP will now automatically finish setting up your virtual machine, and then will automatically reboot. After rebooting, select your automatic update settings. You may see a prompt to check for drivers; simply press Cancel, as all the drivers we need will be installed later with the Guest Additions. Your last settings will be finalized, and finally you will see your XP desktop in VirtualBox. Please note that XP Mode may not remain activated after importing it into VirtualBox. When you activate it, use the key that is located at C:\Program Files\Windows XP Mode\key.txt. Note: During our tests we weren’t able to get the activation to go through. We are looking into the issue and will have a revised article showing the correct way to get XP Mode in VirtualBox working correctly soon.
Now we have one final thing to install – the VirtualBox Guest Additions. In the VirtualBox window, click “Devices” and then select “Install Guest Additions”. This should automatically launch in XP; if it doesn’t, click Start, then My Computer, and finally double-click on the CD drive, which should say VirtualBox Guest Additions. Simply install with the normal presets. You can select to install an experimental 3D graphics driver if you wish to try to run games in XP in VirtualBox; however, do note that this is not fully supported and is currently a test feature. You may see a prompt informing you that the drivers have not passed Logo testing; simply press “Continue Anyway” to proceed with the installation. When installation has completed, you will be required to reboot your virtual machine. Now, you can move your mouse directly from Windows XP to Windows 7 without pressing Ctrl.
Integrating with Windows 7
Once your virtual machine is rebooted, you can integrate it with your Windows 7 desktop. In the VirtualBox window, click Machine and then select “Seamless Mode”. In Seamless Mode you’ll have the XP Start menu and taskbar sit on top of your Windows 7 Start menu and taskbar. Here we see XP running in VirtualBox in Seamless Mode. We have the old XP WordPad sitting next to the new Windows 7 version of WordPad. Another view of everything running seamlessly together on the same Windows 7 desktop. Hover the pointer over the XP taskbar to pull up the VirtualBox menu items. You can exit Seamless Mode from the VirtualBox menu or using “Ctrl+L”. Then you go back to having it run separately on your desktop again.
Conclusion
Running XP Mode in a virtual machine is a great way to experience the feature on computers without Hardware Virtualization capabilities. If you prefer VMware Player, then you’ll want to check out our articles on how to run XP Mode on Windows 7 machines without Hardware Virtualization, and how to create an XP Mode for Windows 7 Home Premium and Vista.
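If you'd rather script the VM creation than click through the wizard, VBoxManage can do the same thing from the command line. This is a rough sketch assuming VirtualBox 3.1 or later and that you copied the VHD to your Documents folder as described above; the VM name and memory size are arbitrary choices:
REM Create and register a new XP VM, give it 512 MB of RAM,
REM then attach the copied XP Mode disk and boot it.
VBoxManage createvm --name "XP Mode" --ostype WindowsXP --register
VBoxManage modifyvm "XP Mode" --memory 512
VBoxManage storagectl "XP Mode" --name "IDE Controller" --add ide
VBoxManage storageattach "XP Mode" --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium "%USERPROFILE%\Documents\Windows XP Mode base.vhd"
VBoxManage startvm "XP Mode"
You would still install the Guest Additions from the Devices menu as described above.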
Download VirtualBox Download XP Mode

    Read the article


  • Content Management for WebCenter Installation Guide

    - by Gary Niu
    Overview
As we know, there are two ways to install Content Management for WebCenter. One way is to install it with the WebCenter installer wizard; the other is to install it using its own installer. This guide covers the latter. For SSO purposes, I also mention how to configure an OID identity store for Content Management for WebCenter.
Content Management for WebCenter (10.1.3.5.1), Oracle Enterprise Linux R5U4
Basic Installation
-bash-3.2$ ./setup.sh Please select your locale from the list. 1. Chinese-Simplified 2. Chinese-Traditional 3. Deutsch *4. English-US 5. English-UK 6. Español 7. Français 8. Italiano 9. Japanese 10. Korean 11. Nederlands 12. Português-Brazil Choice? Throughout the install, when entering a text value, you can press Enter to accept the default that appears between square brackets ([]). When selecting from a list, you can select the choice followed by an asterisk by pressing Enter. Select installation type from the list. *1. Install new server 2. Update a server Choice? Content Server Installation Directory Please enter the full pathname to the installation directory. Content Server Core Folder [/oracle/ucm/server]:/opt/oracle/ucm/server Create Directory *1. yes 2. no Choice? Java virtual machine *1. Sun Java 1.5.0_11 JDK 2. Specify a custom Java virtual machine Choice? Installing with Java version 1.5.0_11. Enter the location of the native file repository. This directory contains the native files checked in by contributors. Content Server Native Vault Folder [/opt/oracle/ucm/server/vault/]: Create Directory *1. yes 2. no Choice? Enter the location of the web-viewable file repository. This directory contains files that can be accessed through the web server. Content Server Weblayout Folder [/opt/oracle/ucm/server/weblayout/]: Create Directory *1. yes 2. no Choice? This server can be configured to manage its own authentication or to allow another master to act as an authentication proxy. Configure this server as a master or proxied server. *1. Configure as a master server. 2. Configure as server proxied by a local master server. Choice? During installation, an admin server can be installed and configured to manage this server. If there is already an admin server on this system, you can have the installer configure it to administrate this server instead. Select admin server configuration. *1. Install an admin server to manage this server. 2. Configure an existing admin server to manage this server. 3. Don't configure an admin server. Choice? Enter the location of an executable to start your web browser. This browser will be used to display the online help. Web Browser Path [/usr/bin/firefox]: Content Server System locale 1. Chinese-Simplified 2. Chinese-Traditional 3. Deutsch *4. English-US 5. English-UK 6. Español 7. Français 8. Italiano 9. Japanese 10. Korean 11. Nederlands 12. Português-Brazil Choice? Please select the region for your timezone from the list. *1. Use the timezone setting for your operating system 2. Pacific 3. America 4. Atlantic 5. Europe 6. Africa 7. Asia 8. Indian 9. Australia Choice?
Please enter the port number that will be used to connect to the Content Server. This port must be otherwise unused. Content Server Port [4444]: Please enter the port number that will be used to connect to the Admin Server. This port must be otherwise unused. Admin Server Port [4440]: Enter a security filter for the server port. Hosts which are allowed to communicate directly with the server port may access any resources managed by the server. Insure that hosts which need access are included in the filter. See the installation guide for more details. Incoming connection address filter [127.0.0.1]:*.*.*.* *** Content Server URL Prefix The URL prefix specified here is used when generating HTML pages that refer to the contents of the weblayout directory within the installation. This prefix must be mapped in the web server Additional Document Directories section of the Content Management administration menu to the physical location of the weblayout directory. For example, "/idc/" would be used in your installation to refer to the URL http://ucm.company.com/idc which would be mapped in the web server to the physical location /oracle/ucm/server/weblayout. Web Server Relative Root [/idc/]: Enter the name of the local mail server. The server will contact this system to deliver email. Company Mail Server [mail]: Enter the e-mail address for the system administrator. Administrator E-Mail Address [sysadmin@mail]: *** Web Server Address Many generated HTML pages refer to the web server you are using. The address specified here will be used when generating those pages. The address should include the host and domain name in most cases. If your webserver is running on a port other than 80, append a colon and the port number. Examples: www.company.com, ucm.company.com:90 Web Server HTTP Address [yekki]:yekki.cn.oracle.com:7777 Enter the name for this instance. This name should be unique across your entire enterprise. It may not contain characters other than letters, numbers, and underscores. Server Instance Name [idc]: Enter a short label for this instance. This label is used on web pages to identify this instance. It should be less than 12 characters long. Server Instance Label [idc]: Enter a long description for this instance. Server Description [Content Server idc]: Web Server         *1. Apache          2. Sun ONE          3. Configure manually Choice? Please select a database from the list below to use with the Content Server. Content Server Database         *1. Oracle          2. Microsoft SQL Server 2005          3. Microsoft SQL Server 2000          4. Sybase          5. DB2          6. Custom JDBC settings          7. Skip database configuration Choice? Manually configure JDBC settings for this database          1. yes         *2. no Choice? Oracle Server Hostname [localhost]: Oracle Listener Port Number [1521]: *** Database User ID The user name is used to log into the database used by the content server. Oracle User [user]:YEKKI_OCSERVER *** Database Password The password is used to log into the database used by the content server. Oracle Password []:oracle Oracle Instance Name [ORACLE]:orcl Configure the JVM to find the JDBC driver in a specific jar file          1. yes         *2. no Choice? The installer can attempt to create the database tables or you can manually create them. If you choose to manually create the tables, you should create them now. Attempt to create database tables          1. yes         *2. no Choice? Select components to install.          1. 
ContentFolios: Collect related items in folios          2. Folders_g: Organize content into hierarchical folders          3. LinkManager8: Hypertext link management support          4. OracleTextSearch: External Oracle 11g database as search indexer support          5. ThreadedDiscussions: Threaded discussion management Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: 1,2,3,4,5         *1. ContentFolios: Collect related items in folios         *2. Folders_g: Organize content into hierarchical folders         *3. LinkManager8: Hypertext link management support         *4. OracleTextSearch: External Oracle 11g database as search indexer support         *5. ThreadedDiscussions: Threaded discussion management Enter numbers separated by commas to toggle, 0 to unselect all, F to finish: F Checking configuration. . . Configuration OK. Review install settings. . . Content Server Core Folder: /opt/oracle/ucm/server Java virtual machine: Sun Java 1.5.0_11 JDK Content Server Native Vault Folder: /opt/oracle/ucm/server/vault/ Content Server Weblayout Folder: /opt/oracle/ucm/server/weblayout/ Proxy authentication through another server: no Install admin server: yes Web Browser Path: /usr/bin/firefox Content Server System locale: English-US Content Server Port: 4444 Admin Server Port: 4440 Incoming connection address filter: *.*.*.* Web Server Relative Root: /idc/ Company Mail Server: mail Administrator E-Mail Address: sysadmin@mail Web Server HTTP Address: yekki.cn.oracle.com:7777 Server Instance Name: idc Server Instance Label: idc Server Description: Content Server idc Web Server: Apache Content Server Database: Oracle Manually configure JDBC settings for this database: false Oracle Server Hostname: localhost Oracle Listener Port Number: 1521 Oracle User: YEKKI_OCSERVER Oracle Password: 6GP1gBgzSyKa4JW10U8UqqPznr/lzkNn/Ojf6M8GJ8I= Oracle Instance Name: orcl Configure the JVM to find the JDBC driver in a specific jar file: false Attempt to create database tables: no Components: ContentFolios,Folders_g,LinkManager8,OracleTextSearch,ThreadedDiscussions Proceed with install         *1. Proceed          2. Change configuration          3. Recheck the configuration          4. Abort installation Choice? Finished install type Install with warnings at 4/2/10 12:32 AM. Run Scripts -bash-3.2$ ./wc_contentserverconfig.sh /opt/oracle/ucm/server /mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/CS10gR35UpdateBundle.zip' Service 'DELETE_DOC' Extended Service 'DELETE_BYREV_REVISION' Extended Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/ContentAccess/ContentAccess-linux.zip' (internal)      04.02 00:40:38.019      main    updateDocMetaDefinitionV11: adding decimal column Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/Folders_g.zip' Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/FusionLibraries.zip' Installing '/opt/oracle/ucm/server/custom/CS10gR35UpdateBundle/extras/JpsUserProvider.zip' Installing '/mnt/hgfs/SOFTWARE/ofm_ucm_generic_10.1.3.5.1_disk1_1of1/ContentServer/webcenter-conf/WcConfigure.zip' Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . 
Apr 2, 2010 12:41:24 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . Apr 2, 2010 12:41:27 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.core.util.JpsConfigUtil getPasswordCredential WARNING: A password credential is expected; instead found . Apr 2, 2010 12:41:28 AM oracle.security.jps.internal.idstore.util.IdentityStoreUtil getUnamePwdFromCredStore WARNING: The credential with map JPS and key ldap.credential does not exist. Restart Content Server to apply updates.
Configuring the Apache Web Server
Append the following line to httpd.conf: include "/opt/oracle/ucm/server/data/users/apache22/apache.conf"
Configuring the Identity Store (Optional)
1. Stop Oracle Content Server and the Admin Server
2. Update the Oracle Content Server's JPS configuration file, jps-config.xml:
a. Add a service instance:
<serviceInstance provider="idstore.ldap.provider" name="idstore.oid">
<property name="subscriber.name" value="dc=cn,dc=oracle,dc=com"></property>
<property name="idstore.type" value="OID"></property>
<property name="security.principal.key" value="ldap.credential"></property>
<property name="security.principal.alias" value="JPS"></property>
<property name="ldap.url" value="ldap://yekki.cn.oracle.com:3060"></property>
<extendedProperty>
<name>user.search.bases</name>
<values>
<value>cn=users,dc=cn,dc=oracle,dc=com</value>
</values>
</extendedProperty>
<extendedProperty>
<name>group.search.bases</name>
<values>
<value>cn=groups,dc=cn,dc=oracle,dc=com</value>
</values>
</extendedProperty>
<property name="username.attr" value="uid"></property>
<property name="user.login.attr" value="uid"></property>
<property name="groupname.attr" value="cn"></property>
</serviceInstance>
b. Ensure that the <jpsContext> entry in the jps-config.xml file refers to the new serviceInstance, that is, idstore.oid and not idstore.ldap:
<jpsContext name="default">
<serviceInstanceRef ref="idstore.oid"/>
3. Run the new script to set up the credentials for idstore.oid in the credential store:
cd CONTENT_SERVER_HOME/custom/FusionLibraries/tools
-bash-3.2$ ./run_credtool.sh
Buildfile: ./../tools/credtool.xml [input] skipping input as property action has already been set. [input] Alias: [JPS] [input] Key: [ldap.credential] [input] User Name: cn=orcladmin [input] Password: welcome1 [input] JPS Config: [/opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml] manage-creds: [echo] @@@ Help: run 'ant manage-creds' command to see the detailed usage [java] Using default context in /opt/oracle/ucm/server/custom/FusionLibraries/tools/../../../config/jps-config.xml file for credential store. [java] Credential store location : /opt/oracle/ucm/server/config [java] Credential with map JPS key ldap.credential stored successfully! [java] [java] [java] Credential for map JPS and key ldap.credential is: [java] PasswordCredential name : cn=orcladmin [java] PasswordCredential password : welcome1 BUILD SUCCESSFUL Total time: 1 minute 27 seconds
Testing
1. Access http://yekki.cn.oracle.com:7777/idc
2. Log in with an OID user, for example: orcladmin/welcome1
3. Make sure your JpsUserProvider status is "good"
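Before testing in the browser, it can be worth verifying that the Content Server host can actually bind to OID with the credentials stored above. Here's a quick sketch using the standard ldapsearch client, reusing the host, port, bind DN, and search base from the configuration in this guide (adjust for your environment):
ldapsearch -h yekki.cn.oracle.com -p 3060 -D "cn=orcladmin" -w welcome1 \
  -b "cn=users,dc=cn,dc=oracle,dc=com" "(uid=*)" uid
If user entries come back, the identity store wiring is in place.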

    Read the article

  • Quantal: Broken apt-index, can't fix dependencies

    - by arcyqwerty
    I can't seem to add, remove, or update packages. Ubuntu's software updater shows a notice about a partial upgrade but fails. It seems similar to this problem.
$ sudo apt-get update
Ign http://archive.ubuntu.com quantal InRelease Ign http://security.ubuntu.com precise-security InRelease Ign http://us.archive.ubuntu.com precise InRelease Ign http://extras.ubuntu.com precise InRelease Ign http://us.archive.ubuntu.com precise-updates InRelease Ign http://us.archive.ubuntu.com precise-backports InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://archive.canonical.com precise InRelease Ign http://ppa.launchpad.net precise InRelease Ign http://ppa.launchpad.net precise InRelease Hit http://archive.ubuntu.com quantal Release.gpg Get:1 http://security.ubuntu.com precise-security Release.gpg [198 B] Hit http://extras.ubuntu.com precise Release.gpg Get:2 http://us.archive.ubuntu.com precise Release.gpg [198 B] Get:3 http://us.archive.ubuntu.com precise-updates Release.gpg [198 B] Hit http://archive.canonical.com precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release.gpg Hit http://archive.ubuntu.com quantal Release Get:4 http://security.ubuntu.com precise-security Release [49.6 kB] Get:5 http://us.archive.ubuntu.com precise-backports Release.gpg [198 B] Get:6 http://us.archive.ubuntu.com precise Release [49.6 kB] Hit http://extras.ubuntu.com precise Release Hit http://archive.canonical.com precise Release Hit http://ppa.launchpad.net precise Release.gpg Hit http://ppa.launchpad.net precise Release Hit http://archive.ubuntu.com quantal/main amd64 Packages Hit http://extras.ubuntu.com precise/main Sources Hit http://archive.canonical.com precise/partner Sources Hit http://ppa.launchpad.net precise Release Hit http://ppa.launchpad.net precise Release Get:7 http://us.archive.ubuntu.com precise-updates Release [49.6 kB] Hit http://extras.ubuntu.com precise/main amd64 Packages Hit http://extras.ubuntu.com precise/main i386 Packages Hit http://archive.ubuntu.com quantal/main i386 Packages Hit http://archive.ubuntu.com quantal/main Translation-en Hit http://archive.canonical.com precise/partner amd64 Packages Hit http://archive.canonical.com precise/partner i386 Packages Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main amd64 Packages Hit http://ppa.launchpad.net precise/main i386 Packages Get:8 http://us.archive.ubuntu.com precise-backports Release [49.6 kB] Hit http://ppa.launchpad.net precise/main Sources Get:9 http://security.ubuntu.com precise-security/main Sources [22.5 kB] Hit http://ppa.launchpad.net precise/main amd64 Packages Hit http://ppa.launchpad.net precise/main i386 Packages Hit http://ppa.launchpad.net precise/main Sources Hit http://ppa.launchpad.net precise/main amd64 Packages Hit http://ppa.launchpad.net precise/main i386 Packages Get:10 http://us.archive.ubuntu.com precise/main Sources [934 kB] Get:11 http://security.ubuntu.com precise-security/restricted Sources [14 B] Get:12 http://security.ubuntu.com precise-security/universe Sources [7,832 B] Ign http://archive.ubuntu.com quantal/main Translation-en_US Get:13 http://security.ubuntu.com precise-security/multiverse Sources [713 B] Get:14 http://security.ubuntu.com precise-security/main amd64 Packages [67.8 kB] Ign http://archive.canonical.com precise/partner Translation-en_US Get:15 http://security.ubuntu.com precise-security/restricted amd64 Packages [14 B] Get:16 http://security.ubuntu.com precise-security/universe amd64
Packages [18.8 kB] Ign http://extras.ubuntu.com precise/main Translation-en_US Ign http://archive.canonical.com precise/partner Translation-en Get:17 http://security.ubuntu.com precise-security/multiverse amd64 Packages [1,155 B] Get:18 http://security.ubuntu.com precise-security/main i386 Packages [70.2 kB] Ign http://extras.ubuntu.com precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Get:19 http://security.ubuntu.com precise-security/restricted i386 Packages [14 B] Get:20 http://security.ubuntu.com precise-security/universe i386 Packages [19.0 kB] Get:21 http://security.ubuntu.com precise-security/multiverse i386 Packages [1,394 B] Ign http://ppa.launchpad.net precise/main Translation-en Hit http://security.ubuntu.com precise-security/main Translation-en Hit http://security.ubuntu.com precise-security/multiverse Translation-en Hit http://security.ubuntu.com precise-security/restricted Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Ign http://ppa.launchpad.net precise/main Translation-en_US Ign http://ppa.launchpad.net precise/main Translation-en Hit http://security.ubuntu.com precise-security/universe Translation-en Ign http://security.ubuntu.com precise-security/main Translation-en_US Ign http://security.ubuntu.com precise-security/multiverse Translation-en_US Ign http://security.ubuntu.com precise-security/restricted Translation-en_US Ign http://security.ubuntu.com precise-security/universe Translation-en_US Get:22 http://us.archive.ubuntu.com precise/restricted Sources [5,470 B] Get:23 http://us.archive.ubuntu.com precise/universe Sources [5,019 kB] Get:24 http://us.archive.ubuntu.com precise/multiverse Sources [155 kB] Get:25 http://us.archive.ubuntu.com precise/main amd64 Packages [1,273 kB] Get:26 http://us.archive.ubuntu.com precise/restricted amd64 Packages [8,452 B] Get:27 http://us.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB] Get:28 http://us.archive.ubuntu.com precise/multiverse amd64 Packages [119 kB] Get:29 http://us.archive.ubuntu.com precise/main i386 Packages [1,274 kB] Get:30 http://us.archive.ubuntu.com precise/restricted i386 Packages [8,431 B] Get:31 http://us.archive.ubuntu.com precise/universe i386 Packages [4,796 kB] Get:32 http://us.archive.ubuntu.com precise/multiverse i386 Packages [121 kB] Hit http://us.archive.ubuntu.com precise/main Translation-en Hit http://us.archive.ubuntu.com precise/multiverse Translation-en Hit http://us.archive.ubuntu.com precise/restricted Translation-en Hit http://us.archive.ubuntu.com precise/universe Translation-en Get:33 http://us.archive.ubuntu.com precise-updates/main Sources [124 kB] Get:34 http://us.archive.ubuntu.com precise-updates/restricted Sources [1,379 B] Get:35 http://us.archive.ubuntu.com precise-updates/universe Sources [30.9 kB] Get:36 http://us.archive.ubuntu.com precise-updates/multiverse Sources [1,058 B] Get:37 http://us.archive.ubuntu.com precise-updates/main amd64 Packages [311 kB] Get:38 http://us.archive.ubuntu.com precise-updates/restricted amd64 Packages [2,417 B] Get:39 http://us.archive.ubuntu.com precise-updates/universe amd64 Packages [85.4 kB] Get:40 http://us.archive.ubuntu.com precise-updates/multiverse amd64 Packages [1,829 B] Get:41 http://us.archive.ubuntu.com precise-updates/main i386 Packages [314 kB] Get:42 http://us.archive.ubuntu.com precise-updates/restricted i386 Packages [2,439 B] Get:43 http://us.archive.ubuntu.com precise-updates/universe i386 
Packages [85.9 kB] Get:44 http://us.archive.ubuntu.com precise-updates/multiverse i386 Packages [2,047 B] Hit http://us.archive.ubuntu.com precise-updates/main Translation-en Hit http://us.archive.ubuntu.com precise-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com precise-updates/restricted Translation-en Hit http://us.archive.ubuntu.com precise-updates/universe Translation-en Get:45 http://us.archive.ubuntu.com precise-backports/main Sources [1,845 B] Get:46 http://us.archive.ubuntu.com precise-backports/restricted Sources [14 B] Get:47 http://us.archive.ubuntu.com precise-backports/universe Sources [11.1 kB] Get:48 http://us.archive.ubuntu.com precise-backports/multiverse Sources [1,383 B] Get:49 http://us.archive.ubuntu.com precise-backports/main amd64 Packages [1,271 B] Get:50 http://us.archive.ubuntu.com precise-backports/restricted amd64 Packages [14 B] Get:51 http://us.archive.ubuntu.com precise-backports/universe amd64 Packages [9,701 B] Get:52 http://us.archive.ubuntu.com precise-backports/multiverse amd64 Packages [996 B] Get:53 http://us.archive.ubuntu.com precise-backports/main i386 Packages [1,271 B] Get:54 http://us.archive.ubuntu.com precise-backports/restricted i386 Packages [14 B] Get:55 http://us.archive.ubuntu.com precise-backports/universe i386 Packages [9,703 B] Get:56 http://us.archive.ubuntu.com precise-backports/multiverse i386 Packages [999 B] Hit http://us.archive.ubuntu.com precise-backports/main Translation-en Hit http://us.archive.ubuntu.com precise-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com precise-backports/restricted Translation-en Hit http://us.archive.ubuntu.com precise-backports/universe Translation-en Ign http://us.archive.ubuntu.com precise/main Translation-en_US Ign http://us.archive.ubuntu.com precise/multiverse Translation-en_US Ign http://us.archive.ubuntu.com precise/restricted Translation-en_US Ign http://us.archive.ubuntu.com precise/universe Translation-en_US Ign http://us.archive.ubuntu.com precise-updates/main Translation-en_US Ign http://us.archive.ubuntu.com precise-updates/multiverse Translation-en_US Ign http://us.archive.ubuntu.com precise-updates/restricted Translation-en_US Ign http://us.archive.ubuntu.com precise-updates/universe Translation-en_US Ign http://us.archive.ubuntu.com precise-backports/main Translation-en_US Ign http://us.archive.ubuntu.com precise-backports/multiverse Translation-en_US Ign http://us.archive.ubuntu.com precise-backports/restricted Translation-en_US Ign http://us.archive.ubuntu.com precise-backports/universe Translation-en_US Fetched 19.9 MB in 34s (571 kB/s) Reading package lists... Done
$ sudo apt-get upgrade
Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: netbase : Breaks: ifupdown (< 0.7) Breaks: ifupdown:i386 (< 0.7) E: Unmet dependencies. Try using -f.
$ sudo apt-get -f install
Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: dh-apparmor html2text libmail-sendmail-perl libsys-hostname-long-perl Use 'apt-get autoremove' to remove them. The following extra packages will be installed: ifupdown Suggested packages: rdnssd The following packages will be upgraded: ifupdown 1 upgraded, 0 newly installed, 0 to remove and 1179 not upgraded.
85 not fully installed or removed. Need to get 0 B/54.1 kB of archives. After this operation, 19.5 kB of additional disk space will be used. Do you want to continue [Y/n]? (Reading database ... 222498 files and directories currently installed.) Preparing to replace ifupdown 0.7~beta2ubuntu8 (using .../ifupdown_0.7.1ubuntu1_amd64.deb) ... Unpacking replacement ifupdown ... dpkg: error processing /var/cache/apt/archives/ifupdown_0.7.1ubuntu1_amd64.deb (--unpack): trying to overwrite '/etc/init.d/networking', which is also in package netbase 5.0ubuntu1 Processing triggers for man-db ... Processing triggers for ureadahead ... Errors were encountered while processing: /var/cache/apt/archives/ifupdown_0.7.1ubuntu1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)
$ cat /etc/apt/sources.list
# deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ precise universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team.
deb http://us.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb-src http://extras.ubuntu.com/ubuntu precise main deb http://archive.ubuntu.com/ubuntu/ quantal main
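The last line above is the culprit: a quantal repository mixed into an otherwise precise sources.list, which is how the mismatched netbase / ifupdown pair got pulled in. A possible recovery path - sketched here on the assumption that the quantal line was added by accident and the machine should stay on precise - is to drop that line, refresh the index, and let dpkg resolve the /etc/init.d/networking file conflict:
# Remove the stray quantal entry from sources.list (back it up first).
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo sed -i '/quantal/d' /etc/apt/sources.list
sudo apt-get update

# Allow dpkg to overwrite the conflicting file shipped by both the quantal
# ifupdown and the precise netbase, then finish the repair.
sudo apt-get -o Dpkg::Options::="--force-overwrite" -f install
sudo apt-get dist-upgrade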

    Read the article

< Previous Page | 565 566 567 568 569 570 571 572 573 574 575 576  | Next Page >