Search Results

Search found 2696 results on 108 pages for 'compression formats'.


  • Little more help with writing to a buffer with libjpeg

    - by Richard Knop
    So I have managed to find another question discussing how to use the libjpeg to compress an image to jpeg. I have found this code which is supposed to work: Compressing IplImage to JPEG using libjpeg in OpenCV Here's the code (it compiles ok): /* This a custom destination manager for jpeglib that enables the use of memory to memory compression. See IJG documentation for details. */ typedef struct { struct jpeg_destination_mgr pub; /* base class */ JOCTET* buffer; /* buffer start address */ int bufsize; /* size of buffer */ size_t datasize; /* final size of compressed data */ int* outsize; /* user pointer to datasize */ int errcount; /* counts up write errors due to buffer overruns */ } memory_destination_mgr; typedef memory_destination_mgr* mem_dest_ptr; /* ------------------------------------------------------------- */ /* MEMORY DESTINATION INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* This function is called by the library before any data gets written */ METHODDEF(void) init_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; /* set destination buffer */ dest->pub.free_in_buffer = dest->bufsize; /* input buffer size */ dest->datasize = 0; /* reset output size */ dest->errcount = 0; /* reset error count */ } /* This function is called by the library if the buffer fills up I just reset destination pointer and buffer size here. Note that this behavior, while preventing seg faults will lead to invalid output streams as data is over- written. */ METHODDEF(boolean) empty_output_buffer (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->pub.next_output_byte = dest->buffer; dest->pub.free_in_buffer = dest->bufsize; ++dest->errcount; /* need to increase error count */ return TRUE; } /* Usually the library wants to flush output here. I will calculate output buffer size here. Note that results become incorrect, once empty_output_buffer was called. This situation is notified by errcount. */ METHODDEF(void) term_destination (j_compress_ptr cinfo) { mem_dest_ptr dest = (mem_dest_ptr)cinfo->dest; dest->datasize = dest->bufsize - dest->pub.free_in_buffer; if (dest->outsize) *dest->outsize += (int)dest->datasize; } /* Override the default destination manager initialization provided by jpeglib. Since we want to use memory-to-memory compression, we need to use our own destination manager. */ GLOBAL(void) jpeg_memory_dest (j_compress_ptr cinfo, JOCTET* buffer, int bufsize, int* outsize) { mem_dest_ptr dest; /* first call for this instance - need to setup */ if (cinfo->dest == 0) { cinfo->dest = (struct jpeg_destination_mgr *) (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT, sizeof (memory_destination_mgr)); } dest = (mem_dest_ptr) cinfo->dest; dest->bufsize = bufsize; dest->buffer = buffer; dest->outsize = outsize; /* set method callbacks */ dest->pub.init_destination = init_destination; dest->pub.empty_output_buffer = empty_output_buffer; dest->pub.term_destination = term_destination; } /* ------------------------------------------------------------- */ /* MEMORY SOURCE INTERFACE METHODS */ /* ------------------------------------------------------------- */ /* Called before data is read */ METHODDEF(void) init_source (j_decompress_ptr dinfo) { /* nothing to do here, really. I mean. I'm not lazy or something, but... we're actually through here. */ } /* Called if the decoder wants some bytes that we cannot provide... 
*/ METHODDEF(boolean) fill_input_buffer (j_decompress_ptr dinfo) { /* we can't do anything about this. This might happen if the provided buffer is either invalid with regards to its content or just a to small bufsize has been given. */ /* fail. */ return FALSE; } /* From IJG docs: "it's not clear that being smart is worth much trouble" So I save myself some trouble by ignoring this bit. */ METHODDEF(void) skip_input_data (j_decompress_ptr dinfo, INT32 num_bytes) { /* There might be more data to skip than available in buffer. This clearly is an error, so screw this mess. */ if ((size_t)num_bytes > dinfo->src->bytes_in_buffer) { dinfo->src->next_input_byte = 0; /* no buffer byte */ dinfo->src->bytes_in_buffer = 0; /* no input left */ } else { dinfo->src->next_input_byte += num_bytes; dinfo->src->bytes_in_buffer -= num_bytes; } } /* Finished with decompression */ METHODDEF(void) term_source (j_decompress_ptr dinfo) { /* Again. Absolute laziness. Nothing to do here. Boring. */ } GLOBAL(void) jpeg_memory_src (j_decompress_ptr dinfo, unsigned char* buffer, size_t size) { struct jpeg_source_mgr* src; /* first call for this instance - need to setup */ if (dinfo->src == 0) { dinfo->src = (struct jpeg_source_mgr *) (*dinfo->mem->alloc_small) ((j_common_ptr) dinfo, JPOOL_PERMANENT, sizeof (struct jpeg_source_mgr)); } src = dinfo->src; src->next_input_byte = buffer; src->bytes_in_buffer = size; src->init_source = init_source; src->fill_input_buffer = fill_input_buffer; src->skip_input_data = skip_input_data; src->term_source = term_source; /* IJG recommend to use their function - as I don't know **** about how to do better, I follow this recommendation */ src->resync_to_restart = jpeg_resync_to_restart; } All I need to do is replace the jpeg_stdio_dest in my program with this code: int numBytes = 0; //size of jpeg after compression char * storage = new char[150000]; //storage buffer JOCTET *jpgbuff = (JOCTET*)storage; //JOCTET pointer to buffer jpeg_memory_dest(&cinfo,jpgbuff,150000,&numBytes); So I need some help to incorporate the above four lines into this function which now works but writes to a file instead of a memory: int write_jpeg_file( char *filename ) { struct jpeg_compress_struct cinfo; struct jpeg_error_mgr jerr; /* this is a pointer to one row of image data */ JSAMPROW row_pointer[1]; FILE *outfile = fopen( filename, "wb" ); if ( !outfile ) { printf("Error opening output jpeg file %s\n!", filename ); return -1; } cinfo.err = jpeg_std_error( &jerr ); jpeg_create_compress(&cinfo); jpeg_stdio_dest(&cinfo, outfile); /* Setting the parameters of the output file here */ cinfo.image_width = width; cinfo.image_height = height; cinfo.input_components = bytes_per_pixel; cinfo.in_color_space = color_space; /* default compression parameters, we shouldn't be worried about these */ jpeg_set_defaults( &cinfo ); /* Now do the compression .. */ jpeg_start_compress( &cinfo, TRUE ); /* like reading a file, this time write one row at a time */ while( cinfo.next_scanline < cinfo.image_height ) { row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components]; jpeg_write_scanlines( &cinfo, row_pointer, 1 ); } /* similar to read file, clean up after we're done compressing */ jpeg_finish_compress( &cinfo ); jpeg_destroy_compress( &cinfo ); fclose( outfile ); /* success code is 1! */ return 1; } Anybody could help me out a bit with it? I've tried meddling with it but I am not sure how to do it. I I just replace this line: jpeg_stdio_dest(&cinfo, outfile); It's not going to work. 
There is more that needs to be changed in that function, and I am a little lost among all those pointers and the memory management.
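One way the four lines could slot into that function is sketched below. This is an untested sketch, not an authoritative answer: it swaps jpeg_stdio_dest for the jpeg_memory_dest manager defined above, keeps the 150000-byte buffer from the question, and still relies on the raw_image, width, height, bytes_per_pixel and color_space variables that write_jpeg_file used.

```c
#include <stdlib.h>
#include <jpeglib.h>

/* Sketch: same flow as write_jpeg_file, but compressing into memory.
   On success *out points to a malloc'ed buffer holding *out_size bytes
   of JPEG data; the caller owns (and frees) the buffer. */
int write_jpeg_memory( unsigned char **out, int *out_size )
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    JSAMPROW row_pointer[1];

    int numBytes = 0;                     /* size of jpeg after compression */
    const int bufsize = 150000;           /* upper bound, as in the question */
    JOCTET *jpgbuff = (JOCTET *)malloc( bufsize );
    if ( !jpgbuff ) return -1;

    cinfo.err = jpeg_std_error( &jerr );
    jpeg_create_compress( &cinfo );

    /* memory destination instead of jpeg_stdio_dest(&cinfo, outfile) */
    jpeg_memory_dest( &cinfo, jpgbuff, bufsize, &numBytes );

    cinfo.image_width      = width;
    cinfo.image_height     = height;
    cinfo.input_components = bytes_per_pixel;
    cinfo.in_color_space   = color_space;
    jpeg_set_defaults( &cinfo );

    jpeg_start_compress( &cinfo, TRUE );
    while ( cinfo.next_scanline < cinfo.image_height ) {
        row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components ];
        jpeg_write_scanlines( &cinfo, row_pointer, 1 );
    }
    jpeg_finish_compress( &cinfo );       /* term_destination fills in numBytes */
    jpeg_destroy_compress( &cinfo );

    *out      = (unsigned char *)jpgbuff;
    *out_size = numBytes;
    return 1;
}
```

Note that if the compressed image ever exceeds the fixed buffer, empty_output_buffer in the destination manager above simply wraps around and the output becomes invalid, so either size the buffer generously or extend the manager to grow it.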

    Read the article

  • Convert a PDF eBook to ePub Format

    - by Matthew Guay
    Would you like to read a PDF eBook on an eReader or mobile device, but aren’t happy with the performance? Here’s how you can convert your PDFs to the popular ePub format so you can easily read them on any device. PDFs are a popular format for eBooks since they render the same on any device and can preserve the exact layout of the print book. However, this benefit is their major disadvantage on mobile devices, as you often have to zoom and pan back and forth to see everything on the page. ePub files, on the other hand, are an increasingly popular option. They can reflow to fill your screen instead of sticking to a strict layout style. With the free Calibre program, you can quickly convert your PDF eBooks to ePub format. Getting Started Download the Calibre installer (link below) for your operating system, and install as normal. Calibre works on recent versions of Windows, OS X, and Linux. The Calibre installer is very streamlined, so the install process was quite quick. Calibre is a great application for organizing your eBooks. It can automatically sort your books by their metadata, and even display their covers in a Coverflow-style viewer. To add an eBook to your library, simply drag-and-drop the file into the Calibre window, or click Add books at the top. Here you can choose to add all the books from a folder and more. Calibre will then add the book(s) to your library, import the associated metadata, and organize them in the catalog. Convert your Books Once you’ve imported your books into Calibre, it’s time to convert them to the format you want. Select the book or books you want to convert, and click Convert E-books. Select whether you want to convert them individually or bulk convert them. The converter window has lots of options, so you can get your ePub book exactly the way you want it. You can simply click OK and go with the defaults, or you can tweak the settings. Do note that the conversion will only work successfully with PDFs that contain actual text. Some PDFs are actually images scanned in from the original books; these will appear just like the PDF after the conversion, and won’t be any easier to read. On the first tab, you’ll notice that Calibre will populate most of the metadata fields with info from your PDF. It will also use the first page of the PDF as the cover. Edit any of the information that may be incorrect, and add any additional information you want associated with the book. If you want to convert your eBook to a format other than ePub, Calibre’s got you covered, too. On the top right, you can choose to output the converted eBook into many different file formats, including the Kindle-friendly MOBI format. One other important settings page is the Structure Detection tab. Here you can choose to have it remove headers and footers in the converted book, as well as automatically detect chapter breaks. Click OK when you’ve finished choosing your settings and Calibre will convert the book. This may take a few minutes, depending on the size of the PDF. If the conversion seems to be taking too long, you can click Show job details for more information on the progress. The conversion usually works well, but we did have one job freeze on us. When we checked the job details, it indicated that the PDF was copy-protected. Most PDF eBooks, however, worked fine. Now, back in the main Calibre window, select your book and save it to disk.
You can choose to save only the EPUB format, or you can select Save to disk to save all formats of the book to your computer. You can also view the ePub file directly in Calibre’s built-in eBook viewer. This is the PDF book we converted, and it looks fairly good in the converted format. It does have some odd line breaks and some misplaced numbers, but on the whole, the converted book is much easier to read, especially on small mobile devices. Even images get included inline, so you shouldn’t be missing anything from the original eBook. Conclusion Calibre makes it simple to read your eBooks in any format you need. It is a project in constant development, updated regularly with better stability and features. Whether you want to read your PDF eBooks on a Sony Reader, Kindle, netbook or smartphone, your books will now be more accessible than ever. And with thousands of free PDF eBooks out there, you’ll be sure to always have something to read. If you’d like some geeky PDF eBooks, Microsoft Press is offering a number of free PDF eBooks right now. Check them out at this link (account required). Download the Calibre eBook program
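If you prefer the command line, Calibre also installs a converter called ebook-convert that does the same job as the GUI dialog; a minimal sketch (the file names are placeholders):

```sh
# convert a PDF to ePub
ebook-convert MyBook.pdf MyBook.epub

# the Kindle-friendly MOBI format works the same way
ebook-convert MyBook.pdf MyBook.mobi
```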

    Read the article

  • Creating a Reverse Proxy with URL Rewrite for IIS

    - by OWScott
    There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports. For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/ as seen in the following image. The URL can be made public or it can be used for your internal staff and have it password protected and/or locked down by IP address. This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed you can do so easily with the Web Platform Installer. A lot can be said about reverse proxies and many different situations and ways to route the traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place. URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that. Getting Started First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route using conditions for certain traffic. After you’ve created your site then open up URL Rewrite at the site level. Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule. If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside you network to the world, so just be aware of what you’re doing and why. The next and final step of the template asks a few questions. The first textbox asks the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http or https here. The template assumes that it’s not entered. You can choose whether to perform SSL Offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe then uncheck this. Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server. 
If you check the “Rewrite the domain names of the links in HTTP responses” checkbox then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site. That’s it! Well, there is a lot more that you can do, but this will give you the base configuration. You can now visit tools.mysite.com on your public web server and it will serve up the site from your internal web server. You should see two rules show up; one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed. One common issue that can occur without outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm whether that’s the issue. Here’s a link with details on how to deal with compression and outbound rules. I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.
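For reference, the rules the template writes into the site's web.config end up looking roughly like the sketch below, using the 10.10.0.50:8111 and tools.mysite.com values from this example; the exact rule names, patterns, and preconditions the template generates on your server may differ slightly.

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- inbound: forward every request to the internal web server -->
      <rule name="ReverseProxyInboundRule1" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
      </rule>
    </rules>
    <outboundRules>
      <!-- outbound: rewrite links in the returned HTML to the public name -->
      <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
        <match filterByTags="A, Form, Img"
               pattern="^http(s)?://10.10.0.50:8111/(.*)" />
        <action type="Rewrite" value="http{R:1}://tools.mysite.com/{R:2}" />
      </rule>
      <preConditions>
        <preCondition name="ResponseIsHtml1">
          <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
        </preCondition>
      </preConditions>
    </outboundRules>
  </rewrite>
</system.webServer>
```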

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let’s take a simple query: “What is the most expensive air-mail order we have received to date?” SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords “IN MEMORY” in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by “may use”? There are a small number of cases where we won’t use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why, for analytical queries, it’s faster to access the data in the new column format rather than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don’t do. In the IM column store a further reduction in the amount of data accessed is possible due to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on the IMCU. The dictionary contains a list of the unique column values within the IMCU.
Since we have an equality predicate we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing: column display_name format a30 SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%'; If we check the session statistics after we execute our query the results would be as follows: SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned'); As you can see, only 2 IMCUs were accessed during the scan as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
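For reference, the two views mentioned above can be joined on statistic# to see the values of those statistics for your own session; a small sketch, to be run in the same session right after the query:

```sql
SELECT n.display_name, s.value
FROM   v$mystat s, v$statname n
WHERE  s.statistic# = n.statistic#
AND    n.display_name IN ('IM scan CUs columns accessed',
                          'IM scan segments minmax eligible',
                          'IM scan CUs pruned');
```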

    Read the article

  • Common reasons for the &lsquo;Sys is undefined&rsquo; error in ASP.NET Ajax applications

      In this blog I will try to summarize the most common reasons for getting the famous 'Sys is undefined' error when running an Ajax enabled web site or application (there are almost one milion results on Google for that phrase). Where does it come from? In every Ajax web pages source you will see a code like this: <script type="text/javascript"> //<![CDATA[ Sys.WebForms.PageRequestManager._initialize('ScriptManager1', document.getElementById('form1')); Sys.WebForms.PageRequestManager.getInstance()._updateControls([], [], [], 90); //]]> </script>   This is the initialization script of the ScriptManager. So, if for some reason the Sys namespace is not available when the code executes you get the Sys is undefined error. Here are the most common reasons and solutions for that problem:   1. The error occurs when you have added a control from RadControls for ASP.NET AJAX, but your application is not configured to use ASP.NET AJAX. For example, in VS 2005 you created a new Blank Site instead of a new Ajax-Enabled Web Site and the Sys is undefined message pops up. To fix it you need to follow the steps described at Configuring ASP.NET Ajax article (check the topic called Adding ASP.NET AJAX Configuration Elements to an Existing Web Site) or simply create the Ajax-Enabled Web Site. You can also check my other blog post on the matter: Visual Studio 2008: Where is the new ASP.NET Ajax-Enabled Web Site template?   2. Authentication - as the website denies access to all pages to unauthorized users, access to the Telerik.Web.UI.WebResource.axd handler is unauthorized (this is the default handler of RadScriptManager). This causes the handler to serve the content of the login page instead of the combined scripts, hence the error. To solve it - add a <location> section to the application configuration file to allow access to Telerik.Web.UI.WebResource.axd to all users, like: <configuration> ... <location path="Telerik.Web.UI.WebResource.axd"> <system.web> <authorization> <allow users="*"/> </authorization> </system.web> </location> ... </configuration>   Note that the access to the standard ScriptResource.axd and WebResource.axd is automatically allowed for all users (authenticated and unauthenticated), so if you use the ScriptManager instead of RadScriptManager - you will not face this problem. The authentication problem does not manifest when you disable script combining or use the CDN. Adding the above configuration section will make it work with RadScriptManagers combined script.   3. The IE6 browser fails to load the compressed script. The problem does not appear in any other browser. There is a well known bug in the older versions of IE6 which lose the first 2,048 bytes of data that are sent back from a Web server that uses HTTP compression. Latest versions of RadScriptManager does not compress the output at all if the client is IE6, but in the previous versions you need to manually disable the output compression to prevent the error. So, if you get the Sys is undefined error in IE6 - update to the latest version of RadControls or simply disable the output compression.   4. Requests to the *.axd files returns Error Code 404 - Not Found. This could  be fixed easily: Check in the IIS management console that the .axd extension (the default HTTP handler extension) is allowed:     Also check if the Verify if file exists checkbox is unchecked (click on the Edit button appearing in the previous screenshot to check). 
More information can be found in our troubleshooting article and in the ASP.NET QA team blog post.   5. The virtual directory in IIS is not marked as a Web Application. Converting it to a Web Application should fix the problem.   6. Check for the <xhtmlConformance mode="Legacy"/> option in your web.config and remove it. It would be rather rare to become a victim of this exact case, but still keep it in mind. Scott Guthrie describes it in more detail.   In the above points I mentioned several times the terms web resources, javascript output, and compressed script. If you want to find out more about these please see the Web Resources Demystified series by my friend and colleague Atanas Korchev.   I hope that one of the above solutions will help you get rid of the Sys is undefined error.

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
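To make that familiarity concrete, here is a minimal, hypothetical Web API controller and the corresponding route registration; the OrdersController name and the data it returns are made up purely for illustration.

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class OrdersController : ApiController
{
    // GET api/orders
    public IEnumerable<string> Get()
    {
        // content negotiation serializes this to JSON, XML, etc.
        return new[] { "Order 1", "Order 2" };
    }

    // GET api/orders/5
    public string Get(int id)
    {
        return "Order " + id;
    }
}

// Route registration, typically called from Application_Start()
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
```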
The Routing and core infrastructure of Web API are very similar to how MVC works providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If your primary focus of an application or even a part of an application is some sort of API then Web API makes great sense. HTTP ServicesIf you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX BackendsIf you're building rich client Web applications that are relying heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX which means the business logic required for data retrieval and data acceptance and validation too lives in the Web API. Perfect fit. Generic HTTP EndpointsAnother good fit are generic HTTP endpoints that to serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
Web API when you're dealing with typical Web application that creates HTML and also uses AJAX functionality for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also MVC supports JSON and XML result data fairly easily as well so there's some confusion where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it .Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily and that functionality is not going away, so just because you Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route) so if you're HTML is part of your API or application in general MVC is still a better choice either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade offs, but at the moment that's not the case. Some Issues To think about Web API is similar to MVC but not the Same Although Web API looks a lot like MVC it's not the same and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API you'll end up creating the filter twice. AJAX Data or AJAX HTML On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API's and its place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that. 
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this, the overhead ended up being actually fairly small if you keep the 'data' requests small and atomic. Performance was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications. Summary Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed! Resources ASP.NET Web API AspConf Ask the Experts Session (first 5 minutes) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • How to remove a package entirely?

    - by maria
    Hi I'm quite new to Linux, but before using it I was hearing that Windows programs, after uninstallation, leaves a lot of remains on the hard disc, and Linux removes all. I'm using Ubuntu 10.04. To uninstall packages I'm using sudo apt-get autoremove application_name or sudo aptitude purge application_name. Recently I have installed texlive-full and for some reasons I had quickly to uninstall it. After I've entered to terminal updatedb, then locate *texlive* and the output was very long: maria@marysia-ubuntu:~$ locate *texlive* /etc/texmf/fmt.d/10texlive-base.cnf /etc/texmf/fmt.d/10texlive-formats-extra.cnf /etc/texmf/fmt.d/10texlive-lang-cyrillic.cnf /etc/texmf/fmt.d/10texlive-lang-czechslovak.cnf /etc/texmf/fmt.d/10texlive-lang-polish.cnf /etc/texmf/fmt.d/10texlive-latex-base.cnf /etc/texmf/fmt.d/10texlive-math-extra.cnf /etc/texmf/fmt.d/10texlive-metapost.cnf /etc/texmf/fmt.d/10texlive-omega.cnf /etc/texmf/fmt.d/10texlive-xetex.cnf /etc/texmf/hyphen.d/09texlive-base.cnf /etc/texmf/hyphen.d/10texlive-lang-arabic.cnf /etc/texmf/hyphen.d/10texlive-lang-croatian.cnf /etc/texmf/hyphen.d/10texlive-lang-cyrillic.cnf /etc/texmf/hyphen.d/10texlive-lang-czechslovak.cnf /etc/texmf/hyphen.d/10texlive-lang-danish.cnf /etc/texmf/hyphen.d/10texlive-lang-dutch.cnf /etc/texmf/hyphen.d/10texlive-lang-finnish.cnf /etc/texmf/hyphen.d/10texlive-lang-french.cnf /etc/texmf/hyphen.d/10texlive-lang-german.cnf /etc/texmf/hyphen.d/10texlive-lang-greek.cnf /etc/texmf/hyphen.d/10texlive-lang-hungarian.cnf /etc/texmf/hyphen.d/10texlive-lang-indic.cnf /etc/texmf/hyphen.d/10texlive-lang-italian.cnf /etc/texmf/hyphen.d/10texlive-lang-latin.cnf /etc/texmf/hyphen.d/10texlive-lang-latvian.cnf /etc/texmf/hyphen.d/10texlive-lang-lithuanian.cnf /etc/texmf/hyphen.d/10texlive-lang-mongolian.cnf /etc/texmf/hyphen.d/10texlive-lang-norwegian.cnf /etc/texmf/hyphen.d/10texlive-lang-other.cnf /etc/texmf/hyphen.d/10texlive-lang-polish.cnf /etc/texmf/hyphen.d/10texlive-lang-portuguese.cnf /etc/texmf/hyphen.d/10texlive-lang-spanish.cnf /etc/texmf/hyphen.d/10texlive-lang-swedish.cnf /etc/texmf/hyphen.d/10texlive-lang-ukenglish.cnf /etc/texmf/updmap.d/10texlive-base.cfg /etc/texmf/updmap.d/10texlive-fonts-extra.cfg /etc/texmf/updmap.d/10texlive-fonts-recommended.cfg /etc/texmf/updmap.d/10texlive-games.cfg /etc/texmf/updmap.d/10texlive-lang-african.cfg /etc/texmf/updmap.d/10texlive-lang-arabic.cfg /etc/texmf/updmap.d/10texlive-lang-cyrillic.cfg /etc/texmf/updmap.d/10texlive-lang-czechslovak.cfg /etc/texmf/updmap.d/10texlive-lang-french.cfg /etc/texmf/updmap.d/10texlive-lang-greek.cfg /etc/texmf/updmap.d/10texlive-lang-hebrew.cfg /etc/texmf/updmap.d/10texlive-lang-indic.cfg /etc/texmf/updmap.d/10texlive-lang-lithuanian.cfg /etc/texmf/updmap.d/10texlive-lang-mongolian.cfg /etc/texmf/updmap.d/10texlive-lang-polish.cfg /etc/texmf/updmap.d/10texlive-lang-vietnamese.cfg /etc/texmf/updmap.d/10texlive-latex-base.cfg /etc/texmf/updmap.d/10texlive-latex-extra.cfg /etc/texmf/updmap.d/10texlive-math-extra.cfg /etc/texmf/updmap.d/10texlive-omega.cfg /etc/texmf/updmap.d/10texlive-pictures.cfg /etc/texmf/updmap.d/10texlive-science.cfg /var/cache/apt/archives/texlive-base_2009-7_all.deb /var/cache/apt/archives/texlive-bibtex-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-binaries_2009-5ubuntu0.2_i386.deb /var/cache/apt/archives/texlive-common_2009-7_all.deb /var/cache/apt/archives/texlive-doc-base_2009-2_all.deb /var/cache/apt/archives/texlive-doc-bg_2009-2_all.deb /var/cache/apt/archives/texlive-doc-cs+sk_2009-2_all.deb 
/var/cache/apt/archives/texlive-doc-de_2009-2_all.deb /var/cache/apt/archives/texlive-doc-en_2009-2_all.deb /var/cache/apt/archives/texlive-doc-es_2009-2_all.deb /var/cache/apt/archives/texlive-doc-fi_2009-2_all.deb /var/cache/apt/archives/texlive-doc-fr_2009-2_all.deb /var/cache/apt/archives/texlive-doc-it_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ja_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ko_2009-2_all.deb /var/cache/apt/archives/texlive-doc-mn_2009-2_all.deb /var/cache/apt/archives/texlive-doc-nl_2009-2_all.deb /var/cache/apt/archives/texlive-doc-pl_2009-2_all.deb /var/cache/apt/archives/texlive-doc-pt_2009-2_all.deb /var/cache/apt/archives/texlive-doc-ru_2009-2_all.deb /var/cache/apt/archives/texlive-doc-si_2009-2_all.deb /var/cache/apt/archives/texlive-doc-th_2009-2_all.deb /var/cache/apt/archives/texlive-doc-tr_2009-2_all.deb /var/cache/apt/archives/texlive-doc-uk_2009-2_all.deb /var/cache/apt/archives/texlive-doc-vi_2009-2_all.deb /var/cache/apt/archives/texlive-doc-zh_2009-2_all.deb /var/cache/apt/archives/texlive-extra-utils_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-font-utils_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-extra-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-fonts-recommended-doc_2009-7_all.deb /var/cache/apt/archives/texlive-fonts-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-formats-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-full_2009-7_all.deb /var/cache/apt/archives/texlive-games_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-generic-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-generic-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-humanities-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-humanities_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-lang-african_2009-3_all.deb /var/cache/apt/archives/texlive-lang-arabic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-armenian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-croatian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-cyrillic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-czechslovak_2009-3_all.deb /var/cache/apt/archives/texlive-lang-danish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-dutch_2009-3_all.deb /var/cache/apt/archives/texlive-lang-finnish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-french_2009-3_all.deb /var/cache/apt/archives/texlive-lang-german_2009-3_all.deb /var/cache/apt/archives/texlive-lang-greek_2009-3_all.deb /var/cache/apt/archives/texlive-lang-hebrew_2009-3_all.deb /var/cache/apt/archives/texlive-lang-hungarian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-indic_2009-3_all.deb /var/cache/apt/archives/texlive-lang-italian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-latin_2009-3_all.deb /var/cache/apt/archives/texlive-lang-latvian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-lithuanian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-mongolian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-norwegian_2009-3_all.deb /var/cache/apt/archives/texlive-lang-other_2009-3_all.deb /var/cache/apt/archives/texlive-lang-polish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-portuguese_2009-3_all.deb /var/cache/apt/archives/texlive-lang-spanish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-swedish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-tibetan_2009-3_all.deb 
/var/cache/apt/archives/texlive-lang-ukenglish_2009-3_all.deb /var/cache/apt/archives/texlive-lang-vietnamese_2009-3_all.deb /var/cache/apt/archives/texlive-latex-base-doc_2009-7_all.deb /var/cache/apt/archives/texlive-latex-base_2009-7_all.deb /var/cache/apt/archives/texlive-latex-extra-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-latex-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-latex-recommended-doc_2009-7_all.deb /var/cache/apt/archives/texlive-latex-recommended_2009-7_all.deb /var/cache/apt/archives/texlive-latex3_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-luatex_2009-7_all.deb /var/cache/apt/archives/texlive-math-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-metapost-doc_2009-7_all.deb /var/cache/apt/archives/texlive-metapost_2009-7_all.deb /var/cache/apt/archives/texlive-music_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-omega_2009-7_all.deb /var/cache/apt/archives/texlive-pictures-doc_2009-7_all.deb /var/cache/apt/archives/texlive-pictures_2009-7_all.deb /var/cache/apt/archives/texlive-plain-extra_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-pstricks-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-pstricks_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-publishers-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-publishers_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-science-doc_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-science_2009-7ubuntu3_all.deb /var/cache/apt/archives/texlive-xetex_2009-7_all.deb /var/cache/apt/archives/texlive_2009-7_all.deb /var/lib/dpkg/info/texlive-base.list /var/lib/dpkg/info/texlive-base.postrm /var/lib/dpkg/info/texlive-bibtex-extra.list /var/lib/dpkg/info/texlive-bibtex-extra.postrm /var/lib/dpkg/info/texlive-doc-base.list /var/lib/dpkg/info/texlive-doc-base.postrm /var/lib/dpkg/info/texlive-doc-bg.list /var/lib/dpkg/info/texlive-doc-bg.postrm /var/lib/dpkg/info/texlive-doc-cs+sk.list /var/lib/dpkg/info/texlive-doc-cs+sk.postrm /var/lib/dpkg/info/texlive-doc-de.list /var/lib/dpkg/info/texlive-doc-de.postrm /var/lib/dpkg/info/texlive-doc-en.list /var/lib/dpkg/info/texlive-doc-en.postrm /var/lib/dpkg/info/texlive-doc-es.list /var/lib/dpkg/info/texlive-doc-es.postrm /var/lib/dpkg/info/texlive-doc-fi.list /var/lib/dpkg/info/texlive-doc-fi.postrm /var/lib/dpkg/info/texlive-doc-fr.list /var/lib/dpkg/info/texlive-doc-fr.postrm /var/lib/dpkg/info/texlive-doc-it.list /var/lib/dpkg/info/texlive-doc-it.postrm /var/lib/dpkg/info/texlive-doc-ja.list /var/lib/dpkg/info/texlive-doc-ja.postrm /var/lib/dpkg/info/texlive-doc-ko.list /var/lib/dpkg/info/texlive-doc-ko.postrm /var/lib/dpkg/info/texlive-doc-mn.list /var/lib/dpkg/info/texlive-doc-mn.postrm /var/lib/dpkg/info/texlive-doc-nl.list /var/lib/dpkg/info/texlive-doc-nl.postrm /var/lib/dpkg/info/texlive-doc-pl.list /var/lib/dpkg/info/texlive-doc-pl.postrm /var/lib/dpkg/info/texlive-doc-pt.list /var/lib/dpkg/info/texlive-doc-pt.postrm /var/lib/dpkg/info/texlive-doc-ru.list /var/lib/dpkg/info/texlive-doc-ru.postrm /var/lib/dpkg/info/texlive-doc-si.list /var/lib/dpkg/info/texlive-doc-si.postrm /var/lib/dpkg/info/texlive-doc-th.list /var/lib/dpkg/info/texlive-doc-th.postrm /var/lib/dpkg/info/texlive-doc-tr.list /var/lib/dpkg/info/texlive-doc-tr.postrm /var/lib/dpkg/info/texlive-doc-uk.list /var/lib/dpkg/info/texlive-doc-uk.postrm /var/lib/dpkg/info/texlive-doc-vi.list /var/lib/dpkg/info/texlive-doc-vi.postrm /var/lib/dpkg/info/texlive-doc-zh.list /var/lib/dpkg/info/texlive-doc-zh.postrm 
/var/lib/dpkg/info/texlive-extra-utils.list /var/lib/dpkg/info/texlive-extra-utils.postrm /var/lib/dpkg/info/texlive-font-utils.list /var/lib/dpkg/info/texlive-font-utils.postrm /var/lib/dpkg/info/texlive-fonts-extra-doc.list /var/lib/dpkg/info/texlive-fonts-extra-doc.postrm /var/lib/dpkg/info/texlive-fonts-extra.list /var/lib/dpkg/info/texlive-fonts-extra.postrm /var/lib/dpkg/info/texlive-fonts-recommended-doc.list /var/lib/dpkg/info/texlive-fonts-recommended-doc.postrm /var/lib/dpkg/info/texlive-fonts-recommended.list /var/lib/dpkg/info/texlive-fonts-recommended.postrm /var/lib/dpkg/info/texlive-formats-extra.list /var/lib/dpkg/info/texlive-formats-extra.postrm /var/lib/dpkg/info/texlive-games.list /var/lib/dpkg/info/texlive-games.postrm /var/lib/dpkg/info/texlive-generic-extra.list /var/lib/dpkg/info/texlive-generic-extra.postrm /var/lib/dpkg/info/texlive-generic-recommended.list /var/lib/dpkg/info/texlive-generic-recommended.postrm /var/lib/dpkg/info/texlive-humanities-doc.list /var/lib/dpkg/info/texlive-humanities-doc.postrm /var/lib/dpkg/info/texlive-humanities.list /var/lib/dpkg/info/texlive-humanities.postrm /var/lib/dpkg/info/texlive-lang-african.list /var/lib/dpkg/info/texlive-lang-african.postrm /var/lib/dpkg/info/texlive-lang-arabic.list /var/lib/dpkg/info/texlive-lang-arabic.postrm /var/lib/dpkg/info/texlive-lang-armenian.list /var/lib/dpkg/info/texlive-lang-armenian.postrm /var/lib/dpkg/info/texlive-lang-croatian.list /var/lib/dpkg/info/texlive-lang-croatian.postrm /var/lib/dpkg/info/texlive-lang-cyrillic.list /var/lib/dpkg/info/texlive-lang-cyrillic.postrm /var/lib/dpkg/info/texlive-lang-czechslovak.list /var/lib/dpkg/info/texlive-lang-czechslovak.postrm /var/lib/dpkg/info/texlive-lang-danish.list /var/lib/dpkg/info/texlive-lang-danish.postrm /var/lib/dpkg/info/texlive-lang-dutch.list /var/lib/dpkg/info/texlive-lang-dutch.postrm /var/lib/dpkg/info/texlive-lang-finnish.list /var/lib/dpkg/info/texlive-lang-finnish.postrm /var/lib/dpkg/info/texlive-lang-french.list /var/lib/dpkg/info/texlive-lang-french.postrm /var/lib/dpkg/info/texlive-lang-german.list /var/lib/dpkg/info/texlive-lang-german.postrm /var/lib/dpkg/info/texlive-lang-greek.list /var/lib/dpkg/info/texlive-lang-greek.postrm /var/lib/dpkg/info/texlive-lang-hebrew.list /var/lib/dpkg/info/texlive-lang-hebrew.postrm /var/lib/dpkg/info/texlive-lang-hungarian.list /var/lib/dpkg/info/texlive-lang-hungarian.postrm /var/lib/dpkg/info/texlive-lang-indic.list /var/lib/dpkg/info/texlive-lang-indic.postrm /var/lib/dpkg/info/texlive-lang-italian.list /var/lib/dpkg/info/texlive-lang-italian.postrm /var/lib/dpkg/info/texlive-lang-latin.list /var/lib/dpkg/info/texlive-lang-latin.postrm /var/lib/dpkg/info/texlive-lang-latvian.list /var/lib/dpkg/info/texlive-lang-latvian.postrm /var/lib/dpkg/info/texlive-lang-lithuanian.list /var/lib/dpkg/info/texlive-lang-lithuanian.postrm /var/lib/dpkg/info/texlive-lang-mongolian.list /var/lib/dpkg/info/texlive-lang-mongolian.postrm /var/lib/dpkg/info/texlive-lang-norwegian.list /var/lib/dpkg/info/texlive-lang-norwegian.postrm /var/lib/dpkg/info/texlive-lang-other.list /var/lib/dpkg/info/texlive-lang-other.postrm /var/lib/dpkg/info/texlive-lang-polish.list /var/lib/dpkg/info/texlive-lang-polish.postrm /var/lib/dpkg/info/texlive-lang-portuguese.list /var/lib/dpkg/info/texlive-lang-portuguese.postrm /var/lib/dpkg/info/texlive-lang-spanish.list /var/lib/dpkg/info/texlive-lang-spanish.postrm /var/lib/dpkg/info/texlive-lang-swedish.list /var/lib/dpkg/info/texlive-lang-swedish.postrm 
/var/lib/dpkg/info/texlive-lang-tibetan.list /var/lib/dpkg/info/texlive-lang-tibetan.postrm /var/lib/dpkg/info/texlive-lang-ukenglish.list /var/lib/dpkg/info/texlive-lang-ukenglish.postrm /var/lib/dpkg/info/texlive-lang-vietnamese.list /var/lib/dpkg/info/texlive-lang-vietnamese.postrm /var/lib/dpkg/info/texlive-latex-base-doc.list /var/lib/dpkg/info/texlive-latex-base-doc.postrm /var/lib/dpkg/info/texlive-latex-base.list /var/lib/dpkg/info/texlive-latex-base.postrm /var/lib/dpkg/info/texlive-latex-extra-doc.list /var/lib/dpkg/info/texlive-latex-extra-doc.postrm /var/lib/dpkg/info/texlive-latex-extra.list /var/lib/dpkg/info/texlive-latex-extra.postrm /var/lib/dpkg/info/texlive-latex-recommended-doc.list /var/lib/dpkg/info/texlive-latex-recommended-doc.postrm /var/lib/dpkg/info/texlive-latex-recommended.list /var/lib/dpkg/info/texlive-latex-recommended.postrm /var/lib/dpkg/info/texlive-latex3.list /var/lib/dpkg/info/texlive-latex3.postrm /var/lib/dpkg/info/texlive-luatex.list /var/lib/dpkg/info/texlive-luatex.postrm /var/lib/dpkg/info/texlive-math-extra.list /var/lib/dpkg/info/texlive-math-extra.postrm /var/lib/dpkg/info/texlive-metapost-doc.list /var/lib/dpkg/info/texlive-metapost-doc.postrm /var/lib/dpkg/info/texlive-metapost.list /var/lib/dpkg/info/texlive-metapost.postrm /var/lib/dpkg/info/texlive-music.list /var/lib/dpkg/info/texlive-music.postrm /var/lib/dpkg/info/texlive-omega.list /var/lib/dpkg/info/texlive-omega.postrm /var/lib/dpkg/info/texlive-pictures-doc.list /var/lib/dpkg/info/texlive-pictures-doc.postrm /var/lib/dpkg/info/texlive-pictures.list /var/lib/dpkg/info/texlive-pictures.postrm /var/lib/dpkg/info/texlive-plain-extra.list /var/lib/dpkg/info/texlive-plain-extra.postrm /var/lib/dpkg/info/texlive-pstricks-doc.list /var/lib/dpkg/info/texlive-pstricks-doc.postrm /var/lib/dpkg/info/texlive-pstricks.list /var/lib/dpkg/info/texlive-pstricks.postrm /var/lib/dpkg/info/texlive-publishers-doc.list /var/lib/dpkg/info/texlive-publishers-doc.postrm /var/lib/dpkg/info/texlive-publishers.list /var/lib/dpkg/info/texlive-publishers.postrm /var/lib/dpkg/info/texlive-science-doc.list /var/lib/dpkg/info/texlive-science-doc.postrm /var/lib/dpkg/info/texlive-science.list /var/lib/dpkg/info/texlive-science.postrm /var/lib/dpkg/info/texlive-xetex.list /var/lib/dpkg/info/texlive-xetex.postrm maria@marysia-ubuntu:~$ I've used sudo apt-get autoclean without any change. I've installed deborphan and it showed nothing (maybe I've used it in wrong way: just entered command deborphan). Am I doing something wrong or I was told something which is not true? I would like to know two things: how to remove packages (if I'm doing it in wrong way) and how to clean hard disc from remains of all packages I've uninstalled till now (even if I don't remember what it was exactly). I have Ubuntu Tweak installed but I don't know how to use it and I think I prefere terminal commnands. Thanks

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events. CLSF - 17th October. XFS update (led by Jeff Liu). XFS keeps making rapid progress, with a lot of changes focused on infrastructure and performance improvements as well as new feature development. This is reflected in a rough comparison of XFS, Ext4+JBD2 and Btrfs (the command run once per subsystem): # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock (a small CRC32c sketch follows at the end of this session's notes). Transaction log space reservation improvements. With this change, the log space reservation is calculated at mount time rather than at runtime, reducing CPU overhead. User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10; thanks to Dwight Engen for his efforts on this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debugging option that can be deployed in production environments. Readahead of log objects during recovery, a change that speeds up log replay significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2). Heated arguments ensued in this session, especially around comparisons between Ext4 and Btrfs in different areas, and I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because: Stable: XFS has a good record for stability over the past 10 years; Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or so, which I owe to the XFS upstream code reviewers, who always perform serious code review as well as testing. Good performance for large and small files: the claim that XFS does not work well for small files has been an outdated story for years. Best choice (maybe) for distributed PB-scale filesystems: e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB): Ext4 does not support a single file larger than roughly 15.95TB. Scalability: any objection that XFS is best on this point? :) XFS handles transaction concurrency better than Ext4; why? The maximum size of the log in XFS is 2038MB compared to 128MB in Ext4. Misc: Ext4 is widely used and has proven fast and stable under various loads and scenarios, XFS just needs more users, and Btrfs is still maturing.
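As a brief aside on the self-describing metadata item above: CRC32c is the Castagnoli CRC (reflected polynomial 0x82F63B78), not the zlib CRC32. Below is a minimal, self-contained C sketch of the bitwise form of this checksum, included only to make the algorithm concrete; the kernel uses optimized (and, on capable x86 CPUs, hardware-assisted) implementations instead.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32c (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const unsigned char *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    const char msg[] = "123456789";

    /* The standard check value for CRC32c("123456789") is 0xE3069283. */
    printf("crc32c = 0x%08X\n", crc32c(0, msg, strlen(msg)));
    return 0;
}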
Ceph Introduction (led by Li Wang). This is a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support: separating metadata and data storage means a file access needs two round trips (fetch the metadata from the MDS, then get the data from an OSD), and small-file access is therefore limited by network latency. The solution is, for small files, to store the data alongside the metadata, so that when a small file is accessed the metadata server can push both metadata and data to the client at the same time. In this way they reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature they have only run some small-scale testing, but they saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. Sequential read performance for 1K-sized files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable enough to be deployed in production for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even widely used across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future. Improve Linux swap for flash storage (led by Shaohua Li). Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM, and a quick way to do that is to use the SSD as swap. But Linux swap is designed for slow hard disks, so there are a lot of challenges in using SSDs efficiently for swap. SWAPOUT: swap_map scan. swap_map is the in-memory data structure that tracks swap disk usage, but it uses a slow linear scan. It becomes a bottleneck when finding many adjacent pages, as happens with SSDs. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach requires restrictive cluster alignment and is only enabled for SSDs. IO pattern. In most cases swap I/O has an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved I/O to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps a reclaimer issue sequential I/O, which the block layer can merge more easily. TLB flush: if we are reclaiming an active page, we first move it from the active LRU list to the inactive LRU list, and then reclaim it from the inactive LRU to swap it out. During this process we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send a lot of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline. SWAPIN: a page fault does iodepth=1 synchronous I/O, but it is a bit wasteful to issue only a page-sized I/O. The obvious solution is swap readahead.
But the current in-kernel swap readahead is arbitrary (always 8 pages), and it does not perform well for either random or sequential access workloads. Shaohua introduced swap prefetch via madvise(MADV_WILLNEED), so the changes happen at the userspace API level and leave the in-kernel readahead unchanged (though I think some improvement can also be done there). SWAP discard. As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and writes to run at the same time. Meanwhile, the unit of discard was also optimized to the cluster. Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very fast SSDs because of the swapcache address_space lock. Zproject (led by Bob Liu). Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap I/O storms and improve performance, but each has its own pros and cons. ZSWAP is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages which are then written to the real swap device, but this can fail when allocating the two pages. ZRAM acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to decompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet. ZCACHE handles file page compression, but it was removed from staging recently. From industry (led by Tang Jie, LSI). An LSI engineer introduced several new products to us. The first is RAID5/6 cards that use full stripe writes to improve performance. The second is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It is called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that with this virtual capacity enabled a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard mmap() system call; he said this is very useful for database logging now (a minimal sketch of that access pattern follows below).
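The mmap()-based access to NV-DRAM described just above is, from the application's point of view, an ordinary memory mapping of a device or file. The sketch below only illustrates that general pattern; /dev/nvram0 is a made-up device name, and whether an explicit msync() is even needed on real NV-DRAM hardware is exactly the kind of detail that depends on the product, so treat this as an assumption-laden illustration rather than vendor guidance.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LOG_SIZE (1 << 20)   /* map 1 MiB of the (hypothetical) device */

int main(void)
{
    /* "/dev/nvram0" is a placeholder name, not a real driver interface. */
    int fd = open("/dev/nvram0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    char *log = mmap(NULL, LOG_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (log == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Append a log record with plain memory writes; no write() syscall involved. */
    const char record[] = "txn 42 committed\n";
    memcpy(log, record, sizeof(record));

    /* Flush the dirtied range; on true NV-DRAM this step may be unnecessary. */
    msync(log, sizeof(record), MS_SYNC);

    munmap(log, LOG_SIZE);
    close(fd);
    return 0;
}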
NVM products of this kind have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially file systems; e.g. journaling may need to be redesigned to fully utilize this nonvolatile memory. OCFS2 (led by Canquan Shen). Without a doubt, Huawei has been the biggest contributor to OCFS2 over the past two years. They have posted 46 upstream patches, of which 39 have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is support for ATS (atomic test and set), which can work together with the DLM. It looks like this idea is inspired by VMware's VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013. Improving Linux Development with Better Tools (Andi Kleen). This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools: Static code checkers, e.g. sparse, smatch, coccinelle, the clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, false positives, and it may take a concentrated effort to get the false positive rate down. In particular, no static checker I have found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); }; foo->do_foo(foo); Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers. Fuzzers/test suites, e.g. Trinity, which is a great tool and finds many bugs, but needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases the overhead is still too high to keep running while debugging. Tools to read/understand source, e.g. grep/cscope, work great for many cases, but they do not understand indirect pointers (the OO-in-C model used in the kernel); asking for all "do_foo" instances misses calls bound through initializers such as: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers (a compilable illustration of this pattern appears at the end of this report). XFS: The High Performance Enterprise File System (Jeff Liu) [slides]. I gave a talk introducing the disk layout, unique features, and recent changes. The slides include some charts comparing performance between XFS, Btrfs and Ext4 for small files. About a dozen people raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, and they were Chris Mason, Ric Wheeler, and one other attendee. The audience questions mainly focused on stability and comparisons with other file systems. Linux Containers (Feng Gao). The speaker explained the purpose of the various namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For userspace tools, he focused mainly on Libvirt LXC rather than LXC itself.
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces for the future: the first is audit, though it is not clear whether it should be tied to the user namespace or not; the other is syslog, but the question is whether we really need it. In-memory Compression (Bob Liu). The same as at CLSF, a nice introduction that I have already covered above. Misc. There were some other talks on ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all of those things. -- Jeff Liu
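As mentioned in the tooling discussion above, here is a small compilable version of the "OO in C" pattern that static checkers and cscope struggle with. It is my own illustrative example, not code from the talk: a text search for do_foo finds the declaration and the initializer, but cannot tell that the call in main() dispatches to my_foo.

#include <stdio.h>

struct foo;                                   /* opaque object type */

struct foo_ops {
    int (*do_foo)(struct foo *obj);           /* indirect call slot */
};

struct foo {
    const struct foo_ops *ops;
    int value;
};

static int my_foo(struct foo *obj)            /* the real implementation */
{
    return printf("do_foo on %d\n", obj->value);
}

static const struct foo_ops foo_ops = {
    .do_foo = my_foo,                         /* bound via a designated initializer */
};

int main(void)
{
    struct foo f = { .ops = &foo_ops, .value = 7 };

    /* Text-based tools see "do_foo" here but cannot resolve it to my_foo. */
    return f.ops->do_foo(&f) > 0 ? 0 : 1;
}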

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
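To make the layout above concrete, here is a short C sketch that walks the members of an archive using the definitions from /usr/include/ar.h (ARMAG, SARMAG and struct ar_hdr). It prints each member's raw name field and size and then skips to the next header, honoring the even-length padding rule. This is a reader's illustration of the on-disk format, not how ar or the link-editor actually process archives.

#include <ar.h>        /* ARMAG, SARMAG, struct ar_hdr */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s archive.a\n", argv[0]); return 1; }

    FILE *fp = fopen(argv[1], "rb");
    if (!fp) { perror("fopen"); return 1; }

    char magic[SARMAG];
    if (fread(magic, 1, SARMAG, fp) != SARMAG ||
        memcmp(magic, ARMAG, SARMAG) != 0) {
        fprintf(stderr, "not an archive\n");
        return 1;
    }

    struct ar_hdr hdr;
    while (fread(&hdr, sizeof(hdr), 1, fp) == 1) {
        /* ar_size is ASCII decimal, space padded, not NUL terminated. */
        char size_str[sizeof(hdr.ar_size) + 1];
        memcpy(size_str, hdr.ar_size, sizeof(hdr.ar_size));
        size_str[sizeof(hdr.ar_size)] = '\0';
        long size = strtol(size_str, NULL, 10);

        printf("member %.16s size %ld\n", hdr.ar_name, size);

        /* Member data is padded with a newline to an even length. */
        fseek(fp, size + (size & 1), SEEK_CUR);
    }

    fclose(fp);
    return 0;
}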
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
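For readers who want to see what the symbol table being widened actually contains, here is a hedged sketch that reads and decodes the traditional 32-bit table. The layout assumed here (a 4-byte big-endian symbol count, then one 4-byte big-endian member-header offset per symbol, then the NUL-terminated symbol names in the same order, all inside the first member, which is named "/") is the common SVR4 arrangement; check ar.h and the ar documentation on your system before relying on it. Per the description above, the 64-bit variant introduced in Solaris 11 widens these integers, so a decoder for it would look the same with 8-byte fields; the exact name of that special member is not something I want to guess here.

#include <ar.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Symbol table integers are big-endian regardless of the host byte order. */
static uint32_t get_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s archive.a\n", argv[0]); return 1; }

    FILE *fp = fopen(argv[1], "rb");
    if (!fp) { perror("fopen"); return 1; }

    char magic[SARMAG];
    struct ar_hdr hdr;
    if (fread(magic, 1, SARMAG, fp) != SARMAG ||
        memcmp(magic, ARMAG, SARMAG) != 0 ||
        fread(&hdr, sizeof(hdr), 1, fp) != 1) {
        fprintf(stderr, "not an archive\n");
        return 1;
    }

    /* The 32-bit symbol table, if present, is the first member, named "/". */
    if (hdr.ar_name[0] != '/' || hdr.ar_name[1] != ' ') {
        fprintf(stderr, "no 32-bit symbol table member\n");
        return 1;
    }

    char size_str[sizeof(hdr.ar_size) + 1];
    memcpy(size_str, hdr.ar_size, sizeof(hdr.ar_size));
    size_str[sizeof(hdr.ar_size)] = '\0';
    long size = strtol(size_str, NULL, 10);

    unsigned char *buf = malloc(size);
    if (!buf || fread(buf, 1, (size_t)size, fp) != (size_t)size) {
        fprintf(stderr, "short read\n");
        return 1;
    }

    uint32_t nsyms = get_be32(buf);
    const unsigned char *offsets = buf + 4;
    const char *name = (const char *)(offsets + (size_t)nsyms * 4);

    for (uint32_t i = 0; i < nsyms; i++) {
        printf("%-30s defined by member header at offset %u\n",
               name, get_be32(offsets + (size_t)i * 4));
        name += strlen(name) + 1;   /* names are NUL terminated, in order */
    }

    free(buf);
    fclose(fp);
    return 0;
}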

    Read the article

  • Cannot determine ethernet address for proxy ARP on PPTP

    - by Linux Intel
    I installed pptp server on a centos 6 64bit server PPTP Server ip : 55.66.77.10 PPTP Local ip : 10.0.0.1 Client1 IP : 10.0.0.60 centos 5 64bit Client2 IP : 10.0.0.61 centos5 64bit PPTP Server can ping Client1 And client 1 can ping PPTP Server PPTP Server can ping Client2 And client 2 can ping PPTP Server The problem is client 1 can not ping Client 2 and i get this error also on PPTP server error log Cannot determine ethernet address for proxy ARP Ping from Client2 to Client1 PING 10.0.0.60 (10.0.0.60) 56(84) bytes of data. --- 10.0.0.60 ping statistics --- 6 packets transmitted, 0 received, 100% packet loss, time 5000ms route -n on PPTP Server Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.60 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 10.0.0.61 0.0.0.0 255.255.255.255 UH 0 0 0 ppp1 55.66.77.10 0.0.0.0 255.255.255.248 U 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 55.66.77.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 1 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 70.14.13.19 255.255.255.255 UGH 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth1 0.0.0.0 70.14.13.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 2 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 84.56.120.60 255.255.255.255 UGH 0 0 0 eth1 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 84.56.120.60 0.0.0.0 UG 0 0 0 eth1 cat /etc/ppp/options.pptpd on PPTP server ############################################################################### # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $ # # Sample Poptop PPP options file /etc/ppp/options.pptpd # Options used by PPP when a connection arrives from a client. # This file is pointed to by /etc/pptpd.conf option keyword. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 and the kernel MPPE module. ############################################################################### # Authentication # Name of the local system for authentication purposes # (must match the second field in /etc/ppp/chap-secrets entries) name pptpd # Strip the domain prefix from the username before authentication. # (applies if you use pppd with chapms-strip-domain patch) #chapms-strip-domain # Encryption # (There have been multiple versions of PPP with encryption support, # choose with of the following sections you will use.) # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o # {{{ refuse-pap refuse-chap refuse-mschap # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. require-mschap-v2 # Require MPPE 128-bit encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) require-mppe-128 # }}} # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o # {{{ #-chap #-chapms # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. #+chapms-v2 # Require MPPE encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) #mppe-40 # enable either 40-bit or 128-bit, not both #mppe-128 #mppe-stateless # }}} # Network and Routing # If pppd is acting as a server for Microsoft Windows clients, this # option allows pppd to supply one or two DNS (Domain Name Server) # addresses to the clients. 
The first instance of this option # specifies the primary DNS address; the second instance (if given) # specifies the secondary DNS address. #ms-dns 10.0.0.1 #ms-dns 10.0.0.2 # If pppd is acting as a server for Microsoft Windows or "Samba" # clients, this option allows pppd to supply one or two WINS (Windows # Internet Name Services) server addresses to the clients. The first # instance of this option specifies the primary WINS address; the # second instance (if given) specifies the secondary WINS address. #ms-wins 10.0.0.3 #ms-wins 10.0.0.4 # Add an entry to this system's ARP [Address Resolution Protocol] # table with the IP address of the peer and the Ethernet address of this # system. This will have the effect of making the peer appear to other # systems to be on the local ethernet. # (you do not need this if your PPTP server is responsible for routing # packets to the clients -- James Cameron) proxyarp # Normally pptpd passes the IP address to pppd, but if pptpd has been # given the delegate option in pptpd.conf or the --delegate command line # option, then pppd will use chap-secrets or radius to allocate the # client IP address. The default local IP address used at the server # end is often the same as the address of the server. To override this, # specify the local IP address here. # (you must not use this unless you have used the delegate option) #10.8.0.100 # Logging # Enable connection debugging facilities. # (see your syslog configuration for where pppd sends to) debug # Print out all the option values which have been set. # (often requested by mailing list to verify options) #dump # Miscellaneous # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive # access. lock # Disable BSD-Compress compression nobsdcomp # Disable Van Jacobson compression # (needed on some networks with Windows 9x/ME/XP clients, see posting to # poptop-server on 14th April 2005 by Pawel Pokrywka and followups, # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 ) novj novjccomp # turn off logging to stderr, since this may be redirected to pptpd, # which may trigger a loopback nologfd # put plugins here # (putting them higher up may cause them to sent messages to the pty) cat /etc/ppp/options.pptp on Client1 and Client2 ############################################################################### # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $ # # Sample PPTP PPP options file /etc/ppp/options.pptp # Options used by PPP when a connection is made by a PPTP client. # This file can be referred to by an /etc/ppp/peers file for the tunnel. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/ # and the kernel MPPE module available from the CVS repository also on # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe. ############################################################################### # Lock the port lock # Authentication # We don't need the tunnel server to authenticate itself noauth # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2 # (you may need to remove these refusals if the server is not using MPPE) refuse-pap refuse-eap refuse-chap refuse-mschap # Compression # Turn off compression protocols we know won't be used nobsdcomp nodeflate # Encryption # (There have been multiple versions of PPP with encryption support, # choose which of the following sections you will use. 
Note that MPPE # requires the use of MSCHAP-V2 during authentication) # # Note that using PPTP with MPPE and MSCHAP-V2 should be considered # insecure: # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2 # https://github.com/moxie0/chapcrack/blob/master/README.md # http://technet.microsoft.com/en-us/security/advisory/2743314 # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module # is not allowed and PPTP-MPPE is not available. # {{{ # Require MPPE 128-bit encryption #require-mppe-128 # }}} # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o # {{{ # Require MPPE 128-bit encryption #mppe required,stateless # }}} IPtables is stopped on clients and server, Also net.ipv4.ip_forward = 1 is enabled on PPTP Server. How can i solve this problem .?
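For background on what the proxyarp option in options.pptpd is doing (this is context, not a confirmed fix for the client-to-client ping failure above): when a PPP link comes up, pppd publishes an ARP entry on the Ethernet side so that other hosts on that LAN resolve the PPP peer's IP address to the server's own MAC address. The C sketch below does the same thing by hand with the SIOCSARP ioctl; the interface name eth0, the peer address 10.0.0.60 and the placeholder MAC bytes are assumptions for illustration, and it must run as root. Note that traffic forwarded between two clients that both terminate on the PPTP server (ppp0 to ppp1) is plain IP routing, so the proxy ARP error and the ping failure may well be separate issues.

#include <arpa/inet.h>
#include <net/if_arp.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct arpreq req;
    memset(&req, 0, sizeof(req));

    /* Peer address to publish (assumption: the PPTP client 10.0.0.60). */
    struct sockaddr_in *sin = (struct sockaddr_in *)&req.arp_pa;
    sin->sin_family = AF_INET;
    inet_pton(AF_INET, "10.0.0.60", &sin->sin_addr);

    /* MAC address of the server's own eth0 (placeholder bytes). */
    unsigned char mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
    req.arp_ha.sa_family = ARPHRD_ETHER;
    memcpy(req.arp_ha.sa_data, mac, sizeof(mac));

    /* Permanent, published ("proxy") entry attached to eth0. */
    req.arp_flags = ATF_PERM | ATF_COM | ATF_PUBL;
    strncpy(req.arp_dev, "eth0", sizeof(req.arp_dev) - 1);

    if (ioctl(fd, SIOCSARP, &req) < 0)
        perror("SIOCSARP");

    close(fd);
    return 0;
}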

    Read the article

  • Varnish default.vcl grace period

    - by Vladimir
    These are my settings for a grace period (/etc/varnish/default.vcl) sub vcl_recv { .... set req.grace = 360000s; ... } sub vcl_fetch { ... set beresp.grace = 360000s; ... } I tested Varnish using localhost and nodejs as a server. I started localhost, the site was up. Then I disconnected server and the site got disconnected in less than 2 min. It says: Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1890127100 Varnish cache server Could you tell me what could be the problem? sub vcl_fetch { if (beresp.ttl < 120s) { ##std.log("Adjusting TTL"); set beresp.ttl = 36000s; ##120s; } # Do not cache the object if the backend application does not want us to. if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") { return(hit_for_pass); } # Do not cache the object if the status is not in the 200s if (beresp.status >= 300) { # Remove the Set-Cookie header #remove beresp.http.Set-Cookie; return(hit_for_pass); } # # Everything below here should be cached # # Remove the Set-Cookie header ####remove beresp.http.Set-Cookie; # Set the grace time ## set beresp.grace = 1s; //change this to minutes in case of app shutdown set beresp.grace = 360000s; ## 10 hour - reduce if it has negative impact # Static assets - browser caches tpiphem for a long time. if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") { /* Remove Expires from backend, it's not long enough */ unset beresp.http.expires; /* Set the clients TTL on this object */ set beresp.http.cache-control = "public, max-age=31536000"; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } else { set beresp.http.Cache-Control = "private, max-age=0, must-revalidate"; set beresp.http.Pragma = "no-cache"; } if (req.url ~ "\.(css|js|min|)\??\d*$") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ##do not duplicate these settings if (req.url ~ ".css") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".js") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".min") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ## If the request to the backend returns a code other than 200, restart the loop ## If the number of restarts reaches the value of the parameter max_restarts, ## the request will be error'ed. max_restarts defaults to 4. This prevents ## an eternal loop in the event that, e.g., the object does not exist at all. 
if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) { return(restart); } if (beresp.status == 302) { return(deliver); } # Never cache posts if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(hit_for_pass); } ##check this setting to ensure that it does not cause issues for browsers with no gzip if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } if (beresp.http.Set-Cookie) { return(deliver); } ##if (req.url == "/index.html") { set beresp.do_esi = true; ##} ## check if this is needed or should be used # return(deliver); the object return(deliver); } sub vcl_recv { ##avoid leeching of images call hot_link; set req.grace = 360000s; ##2m ## if one backend is down - use another if (req.restarts == 0) { set req.backend = cache_director; ##we can specify individual VMs } else if (req.restarts == 1) { set req.backend = cache_director; } ## post calls should not be cached - add cookie for these requests if using micro-caching # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); ## return(pass) goes to backend - not cache } # Don't cache the result of a redirect if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") { return(pass); } # Don't cache the result of a redirect (asking for logon) if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") { return(pass); } # Never cache posts - ensure that we do not use these strings in our URLs' that need to be cached if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(pass); } ## if (req.http.Authorization || req.http.Cookie) { if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } # Handle compression correctly. Different browsers send different # "Accept-Encoding" headers, even though they mostly all support the same # compression mechanisms. By consolidating these compression headers into # a consistent format, we can reduce the size of the cache and get more hits. # @see: http:// varnish.projects.linpro.no/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { # If the browser supports it, we'll use gzip. set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { # Next, try deflate if it is supported. set req.http.Accept-Encoding = "deflate"; } else { # Unknown algorithm. Remove it and send unencoded. unset req.http.Accept-Encoding; } } # lookup graphics, css, js & ico files in the cache if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") { return(lookup); } ##added on 0918 - check if it causes issues with user specific content if (req.request == "GET" && req.http.cookie) { return(lookup); } # Pipe requests that are non-RFC2616 or CONNECT which is weird. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { ##closing connection and calling pipe return(pipe); } ##purge content via localhost only if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } ## do we need this? ## return(lookup); }

    Read the article

  • How to use sudo with WinSCP and ProFTPd?

    - by Gaia
    I need to run the SFTP fileserver binary as root, but direct root login is not allowed. In WinSCP, if I use "default" on SFTP server protocol option everything works as expected. Following the instructions to sudo in WinSCP, I tried using "sudo /usr/sbin/proftpd" (works on the command line without any prompts) but it brings up "Cannot initialize SFTP protocol. Is the host running a SFTP server?" How to use sudo with WinSCP and ProFTPd? WinSCP 4.3.7 GUI Protocol: SFTP-3 CentOS 6.2 Webmin/Virtualmin (Current Version) PS: only cert based login is allowed . 2012-06-17 11:05:56.998 -------------------------------------------------------------------------- . 2012-06-17 11:05:56.998 WinSCP Version 4.3.7 (Build 1679) (OS 6.1.7601 Service Pack 1) . 2012-06-17 11:05:56.998 Configuration: HKEY_CURRENT_USER\Software\Martin Prikryl\WinSCP 2\ . 2012-06-17 11:05:56.999 Login time: Sunday, June 17, 2012 11:05:56 AM . 2012-06-17 11:05:56.999 -------------------------------------------------------------------------- . 2012-06-17 11:05:56.999 Session name: KVM1 (Modified stored session) . 2012-06-17 11:05:57.047 Host name: mykvm.com (Port: 22) . 2012-06-17 11:05:57.048 User name: adminuser (Password: No, Key file: Yes) . 2012-06-17 11:05:57.048 Tunnel: No . 2012-06-17 11:05:57.048 Transfer Protocol: SFTP (SCP) . 2012-06-17 11:05:57.048 Ping type: -, Ping interval: 30 sec; Timeout: 15 sec . 2012-06-17 11:05:57.048 Proxy: none . 2012-06-17 11:05:57.048 SSH protocol version: 2; Compression: Yes . 2012-06-17 11:05:57.048 Bypass authentication: No . 2012-06-17 11:05:57.048 Try agent: Yes; Agent forwarding: No; TIS/CryptoCard: No; KI: Yes; GSSAPI: No . 2012-06-17 11:05:57.048 Ciphers: aes,blowfish,3des,WARN,arcfour,des; Ssh2DES: No . 2012-06-17 11:05:57.048 SSH Bugs: -,-,-,-,-,-,-,-,- . 2012-06-17 11:05:57.048 SFTP Bugs: -,- . 2012-06-17 11:05:57.048 Return code variable: Autodetect; Lookup user groups: Yes . 2012-06-17 11:05:57.048 Shell: default . 2012-06-17 11:05:57.048 EOL: 0, UTF: 2 . 2012-06-17 11:05:57.048 Clear aliases: Yes, Unset nat.vars: Yes, Resolve symlinks: Yes . 2012-06-17 11:05:57.048 LS: ls -la, Ign LS warn: Yes, Scp1 Comp: No . 2012-06-17 11:05:57.048 Local directory: default, Remote directory: home, Update: No, Cache: Yes . 2012-06-17 11:05:57.048 Cache directory changes: Yes, Permanent: Yes . 2012-06-17 11:05:57.048 DST mode: 1 . 2012-06-17 11:05:57.048 -------------------------------------------------------------------------- . 2012-06-17 11:05:57.113 Looking up host "mykvm.com" . 2012-06-17 11:05:57.132 Connecting to xxx.xxx.128.59 port 22 . 2012-06-17 11:05:57.499 Server version: SSH-2.0-OpenSSH_5.3 . 2012-06-17 11:05:57.499 Using SSH protocol version 2 . 2012-06-17 11:05:57.499 We claim version: SSH-2.0-WinSCP_release_4.3.7 . 2012-06-17 11:05:57.679 Server supports delayed compression; will try this later . 2012-06-17 11:05:57.679 Doing Diffie-Hellman group exchange . 2012-06-17 11:05:58.077 Doing Diffie-Hellman key exchange with hash SHA-1 . 2012-06-17 11:05:58.498 Host key fingerprint is: . 2012-06-17 11:05:58.498 ssh-rsa 2048 bd:e4:34:b1:d4:69:d6:4e:e4:26:04:8b:b7:b3:de:c3 . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR client->server encryption . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 client->server MAC algorithm . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR server->client encryption . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 server->client MAC algorithm . 2012-06-17 11:05:58.922 Reading private key file "D:\id_rsa.ppk" ! 
2012-06-17 11:05:58.924 Using username "adminuser". . 2012-06-17 11:05:59.550 Offered public key . 2012-06-17 11:05:59.743 Offer of public key accepted ! 2012-06-17 11:05:59.743 Authenticating with public key "masterkey for admin" . 2012-06-17 11:05:59.764 Prompt (3, SSH key passphrase, , Passphrase for key "masterkey for admin": ) . 2012-06-17 11:06:02.938 Sent public key signature . 2012-06-17 11:06:03.352 Access granted . 2012-06-17 11:06:03.352 Initiating key re-exchange (enabling delayed compression) . 2012-06-17 11:06:03.765 Doing Diffie-Hellman group exchange . 2012-06-17 11:06:03.955 Doing Diffie-Hellman key exchange with hash SHA-1 . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR client->server encryption . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 client->server MAC algorithm . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) compression . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR server->client encryption . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 server->client MAC algorithm . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) decompression . 2012-06-17 11:06:04.839 Opened channel for session . 2012-06-17 11:06:05.247 Started a shell/command . 2012-06-17 11:06:05.253 -------------------------------------------------------------------------- . 2012-06-17 11:06:05.253 Using SFTP protocol. . 2012-06-17 11:06:05.253 Doing startup conversation with host. > 2012-06-17 11:06:05.259 Type: SSH_FXP_INIT, Size: 5, Number: -1 . 2012-06-17 11:06:05.354 Server sent command exit status 0 . 2012-06-17 11:06:05.354 Disconnected: All channels closed * 2012-06-17 11:06:05.380 (ESshFatal) Connection has been unexpectedly closed. Server sent command exit status 0. * 2012-06-17 11:06:05.380 Cannot initialize SFTP protocol. Is the host running a SFTP server?

    Read the article

  • PPTP ping client to client error

    - by Linux Intel
    I installed pptp server on a centos 6 64bit server PPTP Server ip : 55.66.77.10 PPTP Local ip : 10.0.0.1 Client1 IP : 10.0.0.60 centos 5 64bit Client2 IP : 10.0.0.61 centos5 64bit PPTP Server can ping Client1 And client 1 can ping PPTP Server PPTP Server can ping Client2 And client 2 can ping PPTP Server The problem is client 1 can not ping Client 2 route -n on PPTP Server Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.60 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 10.0.0.61 0.0.0.0 255.255.255.255 UH 0 0 0 ppp1 55.66.77.10 0.0.0.0 255.255.255.248 U 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 55.66.77.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 1 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 70.14.13.19 255.255.255.255 UGH 0 0 0 eth0 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth1 0.0.0.0 70.14.13.19 0.0.0.0 UG 0 0 0 eth0 route -n On Client 2 Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0 55.66.77.10 84.56.120.60 255.255.255.255 UGH 0 0 0 eth1 10.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 eth0 0.0.0.0 84.56.120.60 0.0.0.0 UG 0 0 0 eth1 cat /etc/ppp/options.pptpd on PPTP server ############################################################################### # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $ # # Sample Poptop PPP options file /etc/ppp/options.pptpd # Options used by PPP when a connection arrives from a client. # This file is pointed to by /etc/pptpd.conf option keyword. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 and the kernel MPPE module. ############################################################################### # Authentication # Name of the local system for authentication purposes # (must match the second field in /etc/ppp/chap-secrets entries) name pptpd # Strip the domain prefix from the username before authentication. # (applies if you use pppd with chapms-strip-domain patch) #chapms-strip-domain # Encryption # (There have been multiple versions of PPP with encryption support, # choose with of the following sections you will use.) # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o # {{{ refuse-pap refuse-chap refuse-mschap # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. require-mschap-v2 # Require MPPE 128-bit encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) require-mppe-128 # }}} # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o # {{{ #-chap #-chapms # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft # Challenge Handshake Authentication Protocol, Version 2] authentication. #+chapms-v2 # Require MPPE encryption # (note that MPPE requires the use of MSCHAP-V2 during authentication) #mppe-40 # enable either 40-bit or 128-bit, not both #mppe-128 #mppe-stateless # }}} # Network and Routing # If pppd is acting as a server for Microsoft Windows clients, this # option allows pppd to supply one or two DNS (Domain Name Server) # addresses to the clients. The first instance of this option # specifies the primary DNS address; the second instance (if given) # specifies the secondary DNS address. 
#ms-dns 10.0.0.1 #ms-dns 10.0.0.2 # If pppd is acting as a server for Microsoft Windows or "Samba" # clients, this option allows pppd to supply one or two WINS (Windows # Internet Name Services) server addresses to the clients. The first # instance of this option specifies the primary WINS address; the # second instance (if given) specifies the secondary WINS address. #ms-wins 10.0.0.3 #ms-wins 10.0.0.4 # Add an entry to this system's ARP [Address Resolution Protocol] # table with the IP address of the peer and the Ethernet address of this # system. This will have the effect of making the peer appear to other # systems to be on the local ethernet. # (you do not need this if your PPTP server is responsible for routing # packets to the clients -- James Cameron) proxyarp # Normally pptpd passes the IP address to pppd, but if pptpd has been # given the delegate option in pptpd.conf or the --delegate command line # option, then pppd will use chap-secrets or radius to allocate the # client IP address. The default local IP address used at the server # end is often the same as the address of the server. To override this, # specify the local IP address here. # (you must not use this unless you have used the delegate option) #10.8.0.100 # Logging # Enable connection debugging facilities. # (see your syslog configuration for where pppd sends to) debug # Print out all the option values which have been set. # (often requested by mailing list to verify options) #dump # Miscellaneous # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive # access. lock # Disable BSD-Compress compression nobsdcomp # Disable Van Jacobson compression # (needed on some networks with Windows 9x/ME/XP clients, see posting to # poptop-server on 14th April 2005 by Pawel Pokrywka and followups, # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 ) novj novjccomp # turn off logging to stderr, since this may be redirected to pptpd, # which may trigger a loopback nologfd # put plugins here # (putting them higher up may cause them to sent messages to the pty) cat /etc/ppp/options.pptp on Client1 and Client2 ############################################################################### # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $ # # Sample PPTP PPP options file /etc/ppp/options.pptp # Options used by PPP when a connection is made by a PPTP client. # This file can be referred to by an /etc/ppp/peers file for the tunnel. # Changes are effective on the next connection. See "man pppd". # # You are expected to change this file to suit your system. As # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/ # and the kernel MPPE module available from the CVS repository also on # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe. ############################################################################### # Lock the port lock # Authentication # We don't need the tunnel server to authenticate itself noauth # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2 # (you may need to remove these refusals if the server is not using MPPE) refuse-pap refuse-eap refuse-chap refuse-mschap # Compression # Turn off compression protocols we know won't be used nobsdcomp nodeflate # Encryption # (There have been multiple versions of PPP with encryption support, # choose which of the following sections you will use. 
Note that MPPE # requires the use of MSCHAP-V2 during authentication) # # Note that using PPTP with MPPE and MSCHAP-V2 should be considered # insecure: # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2 # https://github.com/moxie0/chapcrack/blob/master/README.md # http://technet.microsoft.com/en-us/security/advisory/2743314 # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module # is not allowed and PPTP-MPPE is not available. # {{{ # Require MPPE 128-bit encryption #require-mppe-128 # }}} # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o # {{{ # Require MPPE 128-bit encryption #mppe required,stateless # }}} IPtables are stopped on clients and server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?
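Client-to-client traffic in this setup has to be routed by the PPTP server between its two ppp interfaces, so the first thing to establish is whether the server actually forwards the packets. A rough diagnostic sketch on the server, using only standard tools:

    # forwarding must be on (the poster reports it already is)
    sysctl net.ipv4.ip_forward

    # while client 1 (10.0.0.60) pings client 2 (10.0.0.61), watch both tunnels;
    # echo requests should arrive on ppp0 and leave again on ppp1
    tcpdump -ni ppp0 icmp
    tcpdump -ni ppp1 icmp

    # if requests arrive on ppp0 but never appear on ppp1, check that nothing
    # filters forwarded traffic and that reverse-path filtering is not interfering
    iptables -L FORWARD -n -v
    sysctl net.ipv4.conf.all.rp_filter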

    Read the article

  • Adaptive Threshold in OpenCV (Version 1 - the swig version)

    - by Neil Benn
    Hello, I'm trying to get adaptive thresholding working in the Python binding to OpenCV (the swig one - I cannot get OpenCV 2.0 working because I am using a BeagleBoard and cross compiling is not working yet). I have a greyscale image (ccg.jpg) and the following code: import opencv from opencv import highgui img = highgui.cvLoadImage("ccg.png") img_bw = opencv.cvCreateImage(opencv.cvGetSize(img), opencv.IPL_DEPTH_8U, 1) opencv.cvAdaptiveThreshold(img, img_bw, 125, opencv.CV_ADAPTIVE_THRESH_MEAN_C, opencv.CV_THRESH_BINARY, 7, 10) When I run this I get the error: RuntimeError: openCV Error: Status=Formats of input arguments do not match function name=cvAdaptiveThreshold error messgae= file_name=cvadapthresh.cpp line=122 I've also tried making the source and dest arguments the same (greyscale) and I get the error 'Unsupported format or combination of formats'. Does anyone have any clues as to where I could be going wrong? Cheers, Neil
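    The "Formats of input arguments do not match" error typically means the source handed to cvAdaptiveThreshold is not a single-channel 8-bit image; cvLoadImage returns a 3-channel image by default even for greyscale files. A sketch of the usual fix with the same swig bindings, assuming they expose cvCvtColor and CV_BGR2GRAY as the C API does:

        import opencv
        from opencv import highgui

        img = highgui.cvLoadImage("ccg.png")          # loaded as 3-channel BGR by default

        # make an explicit single-channel 8-bit copy first
        grey = opencv.cvCreateImage(opencv.cvGetSize(img), opencv.IPL_DEPTH_8U, 1)
        opencv.cvCvtColor(img, grey, opencv.CV_BGR2GRAY)

        img_bw = opencv.cvCreateImage(opencv.cvGetSize(img), opencv.IPL_DEPTH_8U, 1)
        opencv.cvAdaptiveThreshold(grey, img_bw, 255,
                                   opencv.CV_ADAPTIVE_THRESH_MEAN_C,
                                   opencv.CV_THRESH_BINARY, 7, 10)
        highgui.cvSaveImage("ccg_bw.png", img_bw)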

    Read the article

  • Mac Quick Look Preview in an NSView or NSImage?

    - by Niko Matsakis
    I am looking for a way (public or otherwise) to get an NSView, NSImage, CGImageRef, etc that shows the QuickLook preview for a file. Essentially the equivalent of QLThumbnailImageCreate() but for the preview. The public APIs I can find do not support this. They allow the creation of a thumbnail image or a QLPreviewPanel. The panel does in fact display the quick look preview, but I cannot get access to the preview's appearance to embed it in other views, nor can I display multiple previews at once. For background, I am writing an app where users can embed links to other files that should be displayed inline, kind of like an <img> tag in HTML. For images like JPGs and PDFs it's easy to figure out what to display. I thought that for other formats I would use Quick Look to generate a nice visual representation of the file's contents. This way the set of formats supported by my application would be easily extensible (just download new Quick Look generators).

    Read the article

  • Mono mkbundle issue

    - by Sean
    Hello, I am trying to bundle the Mono runtime with a simple C# console program that does nothing but 'hello world'. Got Cygwin, configured it all, though it fails: $ mkbundle -o x2 x.exe --deps -z OS is: Windows Sources: 1 Auto-dependencies: True embedding: C:\cygwin\home\Sean\x.exe compression ratio: 31.71% embedding: C:\PROGRA~2\MONO-2~1.1\lib\mono\4.0\mscorlib.dll compression ratio: 34.68% Compiling: as -o temp.o temp.s gcc -mno-cygwin -g -o x2 -Wall temp.c `pkg-config --cflags --libs mono-2|dos2unix` -lz temp.o temp.c: In function `main': temp.c:173: warning: implicit declaration of function `g_utf16_to_utf8' temp.c:173: warning: assignment makes pointer from integer without a cast temp.c:188: warning: assignment makes pointer from integer without a cast /tmp/ccu8fTcQ.o: In function `main': /home/Sean/temp.c:173: undefined reference to `_g_utf16_to_utf8' /home/Sean/temp.c:188: undefined reference to `_g_utf16_to_utf8' collect2: ld returned 1 exit status [Fail] I tried a post with a similar problem to mine here, but it didn't work either. My configuration: Windows 7 64bit, Mono 2.8.1, latest Cygwin. Not sure what the story is here. Help appreciated.
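    The undefined g_utf16_to_utf8 references suggest the glib symbols that the generated temp.c uses are not on the link line produced by pkg-config mono-2. A rough way to investigate, assuming mkbundle is re-run with --keeptemp so the intermediate files survive; the manual re-link below is illustrative, not a verified fix:

        # see whether glib is pulled in by the mono pkg-config entry at all
        pkg-config --cflags --libs mono-2
        pkg-config --cflags --libs glib-2.0

        # keep mkbundle's temporary stub around, then retry the final link by hand
        # with glib-2.0 added explicitly
        mkbundle -o x2 x.exe --deps -z --keeptemp
        gcc -mno-cygwin -g -o x2 -Wall temp.c $(pkg-config --cflags --libs mono-2 glib-2.0) -lz temp.o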

    Read the article

  • How to test localized winforms application?

    - by Sun
    Background: I have created a sample Windows application to learn how to implement localization. Each of my forms has two RESX files: one for Bulgarian and one for French (Belgium). The default culture is English(XX). To test locally I am currently changing the UI culture programmatically, i.e. Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-BE"); and it works fine. Problem: How can I test without forcing the UI culture programmatically? I have tried changing Control Panel > Regional Options > Standards and Formats to French (Belgium). That changed the DatePicker control, which now displays dates in French. However, I can still see the button text in English, whereas if I test by injecting the CultureInfo programmatically it changes to French. If I am able to change the standards and formats to French, do I still need to install the Multilingual User Interface Pack (MUI)?
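    Part of the confusion is that the regional "Standards and Formats" setting feeds CurrentCulture (dates, numbers), while localized RESX resources are selected by CurrentUICulture, which follows the Windows display/UI language (the MUI pack on that generation of Windows). A small sketch to see which is which on a given machine; this is plain console code, not tied to the poster's forms:

        using System;
        using System.Globalization;
        using System.Threading;

        class CultureCheck
        {
            static void Main()
            {
                // formats (dates, numbers) - driven by "Standards and Formats"
                Console.WriteLine("CurrentCulture:   " + Thread.CurrentThread.CurrentCulture.Name);

                // resource lookup - which .resx satellite assembly WinForms loads
                Console.WriteLine("CurrentUICulture: " + Thread.CurrentThread.CurrentUICulture.Name);

                Console.WriteLine(DateTime.Now.ToString("D", CultureInfo.CurrentCulture));
            }
        }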

    Read the article

  • Audio Conversion C#

    - by Will
    What is the best way to convert various audio formats to PCM? For example: mp3, evrc, ogg, vox. Is there a library out there that will allow me to implement this relatively easily? EDIT: I guess my initial question wasn't really what I needed. Most of the libraries I have found are file converters. What I need is a block converter, where I pass in a 1Kb block of vox data and it returns the converted PCM block. Of course I'll have to tell the converter what type of data it is and various pieces of codec information. The solution I am going for is to save various VoIP formats into a common wav format and to play that conformed file in real time. I thought there should be an easy way to do this because all audio is eventually turned into PCM before it is output anyway.
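    For the file-oriented half of this, one library that is commonly suggested (my suggestion, not something the question names) is NAudio, which decodes MP3 directly and can wrap the Windows ACM codecs to produce PCM; block-level conversion of formats like EVRC or Dialogic VOX still depends on having a codec Windows actually knows about. A minimal file-to-PCM sketch with NAudio:

        using NAudio.Wave;

        class ToPcm
        {
            static void Main()
            {
                // decode an MP3 and write it back out as uncompressed PCM WAV
                using (var reader = new Mp3FileReader("input.mp3"))      // hypothetical file name
                using (var pcm = WaveFormatConversionStream.CreatePcmStream(reader))
                {
                    WaveFileWriter.CreateWaveFile("output.wav", pcm);
                }
            }
        }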

    Read the article

  • How to pass -f specdoc option through rake task

    - by dorelal
    I am using rails 2.3.5. rake spec works fine. This is from spec --help: -f, --format FORMAT[:WHERE] Specifies what format to use for output. Specify WHERE to tell the formatter where to write the output. All built-in formats expect WHERE to be a file name, and will write to $stdout if it's not specified. The --format option may be specified several times if you want several outputs Builtin formats: silent|l : No output progress|p : Text-based progress bar profile|o : Text-based progress bar with profiling of 10 slowest examples specdoc|s : Code example doc strings nested|n : Code example doc strings with nested groups indented html|h : A nice HTML report failing_examples|e : Write all failing examples - input for --example failing_example_groups|g : Write all failing example groups - input for --example How do I pass -f specdoc through the rake task?
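    Two common routes with RSpec 1.x on rails 2.3 are to pass the options through the SPEC_OPTS environment variable that the bundled spec task reads, or to define your own Spec::Rake::SpecTask with spec_opts set (the :specdoc task name below is just an example):

        # one-off, from the shell:
        #   rake spec SPEC_OPTS="--format specdoc"

        # or a dedicated task in the Rakefile / lib/tasks/*.rake:
        require 'spec/rake/spectask'

        Spec::Rake::SpecTask.new(:specdoc) do |t|
          t.spec_files = FileList['spec/**/*_spec.rb']
          t.spec_opts  = ['--format', 'specdoc']
        end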

    Read the article

  • Parse M3U file locations to fully qualified paths

    - by Guy
    I would like to parse the file location information in an M3U playlist into fully qualified paths. The possible formats in M3U files seem to be: c:\mydir\songs\tune.mp3 \songs\tune.mp3 ..\songs\tune.mp3 For the first example, just leave it alone. For the second add the directory that the playlist resides in so it would become c:\playlists\songs\tune.mp3 and the same for the third case so it would also become: c:\playlists\songs\tune.mp3. I'm using vb under VS2008 and I can't find a way to recognise each of the potential location formats in the M3U file. System.IO.Path offers no solution that I can find. I've searched extensively for terms like "convert relative path to absolute" but no luck. Any advice appreciated. Thanks.
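    System.IO.Path can cover all three cases once an entry is classified: a drive-letter check separates the already-qualified paths, and Path.GetFullPath(Path.Combine(...)) resolves the rest against the playlist's folder, collapsing any "..". Note that GetFullPath follows normal ".." semantics, so "..\songs\tune.mp3" relative to c:\playlists resolves to c:\songs\tune.mp3 (the parent folder), not c:\playlists\songs\tune.mp3. A VB sketch; Resolve is a hypothetical helper and c:\playlists is the folder from the question:

        Imports System
        Imports System.IO

        Module M3uPaths
            Function Resolve(ByVal playlistDir As String, ByVal entry As String) As String
                ' case 1: already fully qualified, e.g. c:\mydir\songs\tune.mp3
                If entry.Length > 1 AndAlso entry(1) = ":"c Then
                    Return entry
                End If
                ' cases 2 and 3: "\songs\tune.mp3" and "..\songs\tune.mp3" are treated
                ' as relative to the playlist's folder; GetFullPath collapses the ".."
                Return Path.GetFullPath(Path.Combine(playlistDir, entry.TrimStart("\"c)))
            End Function

            Sub Main()
                Console.WriteLine(Resolve("c:\playlists", "c:\mydir\songs\tune.mp3"))
                Console.WriteLine(Resolve("c:\playlists", "\songs\tune.mp3"))
                Console.WriteLine(Resolve("c:\playlists", "..\songs\tune.mp3"))
            End Sub
        End Module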

    Read the article

  • How to parse date from string ?

    - by Harikrishna
    I want to parse a date from a string, where the date format can be any of several formats. To match a date we can use DateTime.TryParseExact, defining whatever formats we need, and the date will be matched: string[] formats = {"MMM dd yyyy"}; DateTime dateValue; string dateString = "May 26 2008"; if (DateTime.TryParseExact(dateString, formats, new CultureInfo("en-US"), DateTimeStyles.None, out dateValue)) MessageBox.Show(dateValue.ToString()); This matches the date. But it does not work for parsing a date that is embedded inside a larger string. If the string is exactly "May 26 2008" then the format "MMM dd yyyy" matches, but if the date is inside some string like "Abc May 26 2008" then it will not be matched. Can we use a regular expression for that? If yes, how?
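    Yes: one workable approach is to let a regular expression pull the date-shaped substring out of the surrounding text first and then hand only that piece to DateTime.TryParseExact. A sketch for the "MMM dd yyyy" shape; the pattern would need widening for other formats:

        using System;
        using System.Globalization;
        using System.Text.RegularExpressions;

        class ExtractDate
        {
            static void Main()
            {
                string input = "Abc May 26 2008";
                string[] formats = { "MMM d yyyy", "MMM dd yyyy" };

                // grab a "May 26 2008"-shaped chunk out of the surrounding text
                Match m = Regex.Match(input, @"\b[A-Z][a-z]{2} \d{1,2} \d{4}\b");

                DateTime dateValue;
                if (m.Success && DateTime.TryParseExact(m.Value, formats,
                        new CultureInfo("en-US"), DateTimeStyles.None, out dateValue))
                {
                    Console.WriteLine(dateValue);
                }
            }
        }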

    Read the article

  • Python File Meta Tag reading

    - by Jeff
    Anyone know of a Python module that can pull Tag data from multiple media formats? Trying to build an app that allows for manipulation of ASF (Windows Media Player files, ie WMA, WMV, etc), ID3, including both ID3v1 and ID3v2 (MPEG files, ie MP3), MPEG Audio Bit Stream (ie ABS, MP1, MP2, MP3), MPEG Program Stream (MPEG movies, and DVD and HD DVD video discs, ie MPG, MPEG, VOB, EVO), and ISO Base Media File Format (eg QuickTime, MPEG-4 and iTunes AAC files, ie QT, MOV, MP4, M4A, M4B, M4P, M4V, etc). Don't need ALL of that but just most standard consumer formats like mov and mpeg. I can't seem to find a good module to support that or a library. Any recommendations?
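    Two frequently recommended third-party options (suggestions on my part, not something the question names) are mutagen, which covers ID3/MP3, MP4/M4A, ASF/WMA and Ogg tags, and hachoir-metadata for container formats such as MPEG program streams and QuickTime MOV. A small mutagen sketch:

        # pip install mutagen
        from mutagen import File

        # File() sniffs the container type; easy=True normalizes common tag names
        audio = File("example.mp3", easy=True)   # hypothetical path; also handles .m4a, .wma, .ogg
        if audio is not None:
            print("length: %.1f s" % audio.info.length)
            if audio.tags:
                for key, value in audio.items():
                    print("%s = %s" % (key, value))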

    Read the article

  • Howto specify format of Restlet-response in browser?

    - by martin
    Hello everybody, I've started introducing myself to REST, and I use Restlet as the REST framework. I have defined a resource with GET methods for several response formats, like @Get("xml") @Get("json"). I now want to test my defined response formats in my browser, but I don't know which parameter I have to specify in the URL to pick the format, something like: http://localhost:8182/members?type=xml. I've tried some parameter names, but I couldn't find the right one. I know such a parameter must exist, because I've already seen it in a URL, but I forgot the name and couldn't find it on the net. What is the name of this parameter when using Restlet? I would be pleased if somebody could help me. Thanks, Martin
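    Restlet's TunnelService handles exactly this kind of query-string content negotiation; as far as I recall the default parameter name is media, so http://localhost:8182/members?media=xml or ?media=json should select the variant - treat that default as an assumption to verify against your Restlet version. Roughly, the behaviour lives on the Application; the resource below is a stand-in for the poster's own:

        import org.restlet.Application;
        import org.restlet.Restlet;
        import org.restlet.resource.Get;
        import org.restlet.resource.ServerResource;
        import org.restlet.routing.Router;

        public class MembersApplication extends Application {

            // stand-in resource, only here to make the sketch self-contained
            public static class MembersResource extends ServerResource {
                @Get("xml")
                public String toXml()  { return "<members/>"; }

                @Get("json")
                public String toJson() { return "{\"members\": []}"; }
            }

            @Override
            public Restlet createInboundRoot() {
                // query-based tunnelling is on by default; the parameter name
                // defaults to "media" and can be changed via setMediaParameter(...)
                getTunnelService().setQueryTunnel(true);

                Router router = new Router(getContext());
                router.attach("/members", MembersResource.class);
                return router;
            }
        }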

    Read the article

  • Accessing the feed/entry/id field of an ATOM 1.0 feed with the ROME library

    - by PartlyCloudy
    Hi, I feel a bit stupid asking this question, but I don't know how I can access the ID field of an entry when using ROME to parse an Atom feed. ROME provides its own meta level of feeds/items, i.e. SyndFeed and SyndEntry. Being an abstraction over RSS and Atom, they only contain elements both formats support, so there is no method to get the ID of an entry. There are also low-level packages for the distinct formats, and the Atom package contains com.sun.syndication.feed.atom.Entry, which provides getId(). However, I don't know how I can convert my SyndEntry into an Entry, and I have not found a way to do it. The (outdated) tutorials show a conversion, but only for output. So how can I easily access the ID field? Thanks in advance.
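    For what it's worth, the abstraction layer does carry the Atom id, just under a different name: for Atom input, SyndEntry.getUri() is populated from feed/entry/id, so the wire-level Entry usually isn't needed. A short sketch with ROME's standard API (the feed URL is a placeholder):

        import java.net.URL;
        import com.sun.syndication.feed.synd.SyndEntry;
        import com.sun.syndication.feed.synd.SyndFeed;
        import com.sun.syndication.io.SyndFeedInput;
        import com.sun.syndication.io.XmlReader;

        public class AtomIds {
            public static void main(String[] args) throws Exception {
                SyndFeed feed = new SyndFeedInput().build(
                        new XmlReader(new URL("http://example.com/feed.atom")));
                for (Object o : feed.getEntries()) {
                    SyndEntry entry = (SyndEntry) o;
                    // for Atom 1.0 input this is the <id> element of the entry
                    System.out.println(entry.getUri());
                }
            }
        }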

    Read the article

  • Parsing a DateTime String into Custom Date and Time Format String

    - by AMissico
    With .NET, I have "Thursday, April 10, 2008 1:30:00 PM" and I want "dddd, dd MMMM, yyyy h:m:s t", "6:09:01 PM" and want "hh:mm:ss tt", "Fri 29 Aug" and want "ddd d MMM", and so on. It seems I should be able to use DateTimeFormatInfo in some way. I figured I can format the date with each pattern returned by GetAllDateTimePatterns, and when the original date string and the formatted date string match then I have the format. Yet, I want to generate custom formats, not the standard formats. I want the format string. I do not want the date. I have both the DateTime value and the formatted string value for the date. I need <formatString> as in ToString(<formatString>).
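    The candidate-matching idea can only recover patterns it is given candidates for; there is no API that derives an arbitrary format string from a sample, so custom formats have to be added to the candidate list by hand. A sketch of that approach (the extra custom patterns are examples):

        using System;
        using System.Globalization;
        using System.Linq;

        class GuessFormat
        {
            static string GuessPattern(DateTime value, string text, CultureInfo culture)
            {
                // standard patterns for the culture plus whatever custom ones we care about
                var candidates = culture.DateTimeFormat.GetAllDateTimePatterns()
                    .Concat(new[] { "ddd d MMM", "dddd, dd MMMM, yyyy h:m:s t" });

                foreach (string pattern in candidates)
                    if (value.ToString(pattern, culture) == text)
                        return pattern;
                return null;    // no candidate reproduced the text
            }

            static void Main()
            {
                var culture = new CultureInfo("en-US");
                Console.WriteLine(GuessPattern(new DateTime(2008, 4, 10, 13, 30, 0),
                                               "Thursday, April 10, 2008 1:30:00 PM", culture));
                Console.WriteLine(GuessPattern(new DateTime(2008, 8, 29), "Fri 29 Aug", culture));
            }
        }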

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >