Search Results

Search found 9449 results on 378 pages for 'big marc' (page 26 of 378).

  • A big flat text file or an HTML site for language documentation?

    - by Bad Sector
    A project of mine is a small embeddable Tcl-like scripting language, LIL. While I'm mostly making it for my own use, I think it is interesting enough for others to use, so I want it to have nice (but not very "wordy") documentation. So far I'm using a single flat readme.txt file. It explains the language's syntax, features, standard functions, how to use the C API, etc. It is also easy to scan and read in almost every environment out there, from basic text-only terminals to full-fledged high-end graphical desktop environments.

    However, while I have tried to keep things nicely formatted (as much as plain text allows), I still think that, being a big (and growing) wall of text, it isn't as easy on the eyes as it could be. I also feel that sometimes I write less than I want to in order to avoid expanding the text too much. So I thought I could use another project of mine, QuHelp, which is basically a help site generator for sites like this one, with a sidebar that provides a tree of topics/subtopics and offline full-text search. With QuHelp I can use HTML to format the documentation, and if I use QuHelp for some other project that uses LIL, I can import LIL's documentation as part of the other project's documentation.

    However, converting the existing documentation to QuHelp/HTML isn't a small task, especially when it comes to the functions (I'll need to describe them in more detail than the readme.txt file currently does). It would also lose the wide availability it currently has: even if QuHelp's generated code degrades gracefully down to console-only web browsers, plain text is readable everywhere, including from popular editors such as Vim and Emacs. Someone once told me that he likes LIL's documentation because it is readable without leaving his editor.

    So, my question is simply this: should I keep the documentation as it is now, in the form of a single readme.txt file, or should I convert it to something like the site mentioned above? There is also the option of doing both, but I'm not sure I'll always be able to keep them in sync, or whether it is worth the effort. After asking around on IRC I've got mixed answers: some liked the wide availability of the single text file; others said it looks as bad as a man page (personally I don't mind that, since I can read man pages just fine, but other people might have trouble reading them). What do you think?

    Read the article

  • How to better copy&paste big files over RDP?

    - by WebMAOhist
    Recently I made a few attempts to copy & paste a big (1.2 GB) file to a remote computer over RDP. The remote computer is a virtual testing machine running MS Windows Server 2008 Datacenter. First I tried to copy & paste before midnight, when the transfer speed was limited by the client computer's ISP to 100 kB/s. It would have required a few hours, and I was forced to cancel the transfer because the remote desktop became too unresponsive and sluggish. So I restarted it after midnight, when my local transfer speed is over 4 MB/s.

    My impression is that, independently of the transfer speed, the remote computer becomes sluggish while copying over RDP; at the same time, downloading from the internet doesn't make the remote host sluggish. As far as I understand, that is because the clipboard of the remote computer, and therefore its memory, becomes overloaded by the transfer. How can I control (restrict) the usage of the clipboard for a specific process (the pasting of a file)?

    Update: after reading that the slow transfer speed is caused by the encryption used for copy & paste over RDP, and since I am more interested in overall efficiency (both the time to get the file and the ability to work without waiting), I changed the question title from "How to control the usage of the remote desktop clipboard for pasting a big file?" to "How to better copy & paste big files over RDP?". For example, is it better to copy & paste one huge (zip) archive, or to unzip it and copy & paste a folder with the unzipped files? More exactly, what are possible ways to improve the overall experience: the speed of the transfer (i.e. availability of the needed file) and the responsiveness of the remote host (making the remote computer available for work before the copy & paste completes)?
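    One workaround that sidesteps the clipboard entirely (a sketch; it assumes RDP drive redirection is enabled under Local Resources > Drives in mstsc, so the local C: drive shows up inside the session as \\tsclient\C, and all paths here are illustrative) is to pull the file with robocopy from inside the remote session:

        REM Run this inside the remote session, not on the local machine.
        REM \\tsclient\C is the redirected local C: drive.
        REM /Z makes the copy restartable if the session drops.
        robocopy \\tsclient\C\temp C:\incoming bigfile.zip /Z

    This behaves like a plain file copy rather than a clipboard paste, so the remote session tends to stay responsive, much like when the remote host downloads a file itself.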

    Read the article

  • What is a good way to comment if-else-clauses?

    - by acme
    Whenever I'm writing a typical if-else construct in any language, I wonder what would be the best way (in terms of readability and overview) to add comments to it. Especially when commenting the else clause, the comments always feel out of place to me. Say we have a construct like this (the examples are written in PHP):

        if ($big == true) {
            bigMagic();
        } else {
            smallMagic();
        }

    I could comment it like this:

        // check what kind of magic should happen
        if ($big == true) {
            // do some big magic stuff
            bigMagic();
        } else {
            // small magic is enough
            smallMagic();
        }

    or:

        // check what kind of magic should happen
        // do some big magic stuff
        if ($big == true) {
            bigMagic();
        }
        // small magic is enough
        else {
            smallMagic();
        }

    or:

        // check what kind of magic should happen
        // if: do some big magic stuff
        // else: small magic is enough
        if ($big == true) {
            bigMagic();
        } else {
            smallMagic();
        }

    What are your best-practice examples for commenting this?
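    Not an official best practice, just one commonly suggested alternative (a sketch reusing the question's own placeholder names): keep a single summary comment on the decision and let short trailing comments mark each branch, so nothing sits awkwardly between the closing brace and the else:

        // Decide which kind of magic this request needs.
        if ($big) {           // the explicit comparison with true is redundant in PHP
            bigMagic();       // heavyweight path
        } else {
            smallMagic();     // the lightweight path is enough
        }

    The underlying idea is that if the condition and branch names are descriptive enough, most of the per-branch comments become unnecessary.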

    Read the article

  • Is it really good to have a big (500GB/1TB) hard disk as the main one?

    - by aSeptik
    Hi all! ;-) I'm building my new PC, so I'm starting the usual search around the net for good hardware at a good price. But this time I'm not sure I want to buy a big hard disk. I was thinking of having a "very small" HD, like 50GB, as the main one and an external (big) one to store all the other stuff. Assuming I'm using classic heavyweight software like the Adobe suite (Photoshop, Flash) and Autodesk tools, plus some very simple things like a text editor and PHP, do you think this is a good practice to improve performance/speed, or am I just saying something stupid?

    Read the article

  • How to troubleshoot whether a zip file is invalid or just too big to be unzipped?

    - by mireille raad
    Hello, I am trying to unzip a file with a size of 2GB, and I am getting the following error:

        unzip CLTE_C_08.zip
        Archive:  CLTE_C_08.zip
        End-of-central-directory signature not found. Either this file is not
        a zipfile, or it constitutes one disk of a multi-part archive. In the
        latter case the central directory and zipfile comment will be found on
        the last disk(s) of this archive.
        unzip: cannot find zipfile directory in one of CLTE_C_08.zip or
        CLTE_C_08.zip.zip, and cannot find CLTE_C_08.zip.ZIP, period.

    After some googling, some people say this error appears because the file is too big, others say it is because the file is corrupt, and others say it could be a non-unix archive. So my question is: how do I find out whether the file is a valid archive on my CentOS box, and what is the command/trick to uncompress big files (if any)? Thanks in advance :)
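    A few things worth trying (a sketch; it assumes the stock zip/unzip tools, and optionally a JDK or the p7zip package, are installed, and that you have first compared the file's size or checksum against the source to rule out a truncated download):

        # unzip versions before 6.0 cannot read archives over 2 GB (no ZIP64
        # support) and fail with exactly this error; check the version first.
        unzip -v | head -n 1

        # Test the archive's integrity without extracting anything.
        unzip -t CLTE_C_08.zip

        # If the archive really is damaged, try rebuilding its central
        # directory into a repaired copy (the original is left untouched).
        zip -FF CLTE_C_08.zip --out CLTE_C_08-fixed.zip

        # Alternative extractors that handle large/ZIP64 archives:
        jar xf CLTE_C_08.zip     # ships with any JDK
        7za x CLTE_C_08.zip      # from the p7zip package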

    Read the article

  • How to split a big PostScript file (3000 pages) into one file per page (using Windows 7)?

    - by Pablo
    Hi, I'm having trouble with the following: I have a big PDF file that I converted to PostScript (for commercial printing). The resulting file is too big to be processed by the printer. I've been trying to find a way to either:

    1. Convert the original (many-page) PDF file into many PostScript files (one PostScript file per page of the original PDF), or
    2. Convert from PDF to PS (or even EPS) - I managed to do this - and then split the PS file into a collection of smaller files.

    I've tried using Ghostscript, but it is all gibberish to me. Thanks. PS: If you have a good GS tutorial (for dummies?), please share the link.
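    For what it's worth, here is one way option 1 can be done with a reasonably recent Ghostscript on Windows (a sketch: gswin64c.exe - or gswin32c.exe on 32-bit installs - is assumed to be on PATH, and input.pdf stands for your original PDF). It skips the giant intermediate PS file entirely and writes one PostScript file per page straight from the PDF:

        @echo off
        REM One .ps file per page; adjust the upper bound to your page count.
        for /L %%P in (1,1,3000) do gswin64c -dBATCH -dNOPAUSE -q -sDEVICE=ps2write -dFirstPage=%%P -dLastPage=%%P -sOutputFile=page_%%P.ps input.pdf

    The loop drives -dFirstPage/-dLastPage because per-page %d output in -sOutputFile is not reliable with high-level devices like ps2write.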

    Read the article

  • Big Success for the 2012 edition of the Oracle EMEA Cloud CRM Partner Community Forum!

    - by Richard Lefebvre
    The 2012 edition of the Oracle EMEA Cloud Partner Community Forum took place on March 28-29 in Madrid. 100 participants from all over Europe had a chance to interact with the Oracle Cloud CRM Product Management team on multiple subjects, such as the Oracle Cloud CRM and Social Network solutions strategy, the RightNow acquisition, and Fusion CRM business opportunities for partners. During his opening keynote, Anthony Lye (Oracle Senior Vice President and head of Oracle CRM) presented the current Fusion CRM business status, disclosed the overall Oracle CRM product strategy and responded to many questions from the audience. Later that day, eight Oracle ISVs presented their Oracle Cloud CRM add-ons and highlighted the value that system integrators can derive from them as part of a Cloud CRM project.

    After a very friendly networking dinner in a Spanish restaurant, the second day was dedicated to Fusion CRM, with a deeper dive into its major components (Sales, Sales Planning, Marketing), including the Fusion Composers. Briefings on Oracle Consulting Services' dedicated Fusion CRM offerings and the Fusion CRM partner programs concluded the day and the event. All participants rated the event "good" to "excellent" and said it was valuable for planning their Oracle Cloud CRM based business in the near future. We look forward to organising a similar event next year and to welcoming even more partners! Richard Lefebvre

    Read the article

  • Donald Ferguson says end-user programming is the next big thing. Is it?

    - by Joris Meys
    You can guess how I came to ask this question... Anyway: http://www.bbc.co.uk/news/business-11944966 - Donald Ferguson claiming that WebSphere was his biggest disaster and proclaiming that end-user programming will be the way forward. This genuinely spurs the question: what about current programming languages? Honestly, I don't think end-user programming will go much beyond a rather rigid template that you can build some apps around. Look at how few people actually manage to understand the basic functionality of functions in Excel... Plus, I fail to see how complex and performant systems could be built in such an end-user programming language (Visual Basic, anyone?). They are nice to play around with, but for many applications they're just not the thing. So no worries for the old languages, if you ask me. What are your ideas?

    Read the article

  • How to efficiently preserve a really big navigation history in Firefox?

    - by brandizzi
    When I was using a Mac, my Firefox stored items in its history for a really long time. Sometimes I needed to find a link to a site I had seen two years earlier, and it found it! The autocomplete in the Firefox address bar is also really great, so a long history plus autocompletion is a wonderful feature to me. Unfortunately, this does not seem to happen in Ubuntu's Firefox. I looked for solutions, but I just found some Firefox developers saying that the option of expanding the history is out for performance reasons, and that one is well advised not to try to change it (which reads to me as "we cannot make it work well, so we limit the scope"). Anyway, my question is: is there a way to efficiently expand the size of the Firefox history? Sorry for my bitterness, but a solution with strings attached (ones that mostly say I should not do it, like this addon) is no solution for me. Has anyone with the same need found a solution?

    Read the article

  • Why do large IT projects tend to fail or have big cost/schedule overruns?

    - by Pratik
    I always read about large-scale transformation or integration projects that are total or near-total disasters. Even when they somehow manage to succeed, the cost and schedule overruns are enormous. What is the real reason large projects are more prone to failure? Can agile be used in these sorts of projects, or is the traditional approach still the best? One example from Australia is the Queensland payroll project, where they changed the test success criteria to deliver the project. See some more failed projects in this SO question. Have you got any personal experience to share?

    Read the article

  • SQLAuthority News – Download Whitepaper – SQL Server Analysis Services to Hive

    - by pinaldave
    SQL Server Analysis Services is a very interesting subject and I have always enjoyed learning about it. You can read my earlier article over here. Big Data is my new interest and I have been exploring it recently. During this weekend, this blog post caught my attention and I enjoyed reading it.

    Big Data is the next big thing. Growth is predicted to be 60% per year till 2016. There is no single solution in the market right now for the growing volume of big data, and no single solution in the business-intelligence ecosystem either. However, the need for one is ever increasing.

    I am personally a Klout user. You can see my Klout profile over here. I do understand what Klout is trying to achieve: a single place to measure a person's influence. However, it works a bit mysteriously. There are plenty of social media sites on the internet today, and the biggest problem they all face is that everybody opens an account but hardly anyone logs back in. To overcome this issue and bring visitors back, Klout has come up with a system where visitors can give 5/10 K+ to other users in a particular area. Looking at all these activities, Klout is indeed a big consumer of Big Data and an early adopter of big data and Hadoop-based systems. Klout has close to 1 trillion rows of data to analyze and a warehouse of nearly a thousand terabytes. Hive, the layer used to query this Big Data, supports ad-hoc queries through HiveQL, but there are often better solutions. One alternative is to use SQL Server Analysis Services (SSAS) along with HiveQL. As there is no direct method to achieve this, a few common workarounds are already in place; a new ODBC driver from Klout has broken through the limitation, and the SQL Server relational engine can be used as an intermediate stage before SSAS. The white paper discusses these solutions in depth, covering the following important concepts:

    - The Klout Big Data solution
    - Big Data Analytics based on Analysis Services
    - Hadoop/Hive and Analysis Services integration
    - Limitations of direct connectivity
    - Pass-through queries to linked servers
    - Best practices and lessons learned

    The white paper covers all the important concepts that have enabled Klout to go to the next level, as well as a few out-of-the-box solutions that improved efficiency. I personally enjoyed reading this white paper and I encourage all of you to do so: SQL Server Analysis Services to Hive

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology
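    To give a flavour of the pass-through approach the paper covers, here is a minimal, hedged sketch (the DSN, linked-server and table names are all placeholders, not the paper's actual setup): expose Hive through an ODBC DSN, wrap it in a linked server, and push HiveQL down with OPENQUERY so the relational engine can stage results for SSAS:

        -- One-time setup: a linked server over a system ODBC DSN ('HiveDSN')
        -- created with the Hive ODBC driver.
        EXEC master.dbo.sp_addlinkedserver
             @server     = N'HIVE',
             @srvproduct = N'Hive',
             @provider   = N'MSDASQL',
             @datasrc    = N'HiveDSN';

        -- Pass-through query: the HiveQL runs on the cluster and only the
        -- result set comes back (table/column names are hypothetical).
        SELECT *
        FROM OPENQUERY(HIVE, 'SELECT user_id, score FROM klout_scores LIMIT 10');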

    Read the article

  • Publish/Subscribe/Request for exchange of big, complex, and confidential data?

    - by Morten
    I am working on a project where a website needs to exchange complex and confidential (and thus encrypted) data with other systems. The data includes personal information, technical drawings, public documents, etc. We would prefer to avoid the Request-Reply pattern towards the dependent systems (and there are a LOT of them), as that would create an awful lot of empty traffic. On the other hand, I am not sure that a pure Publish/Subscribe pattern would be appropriate, mainly because of the complex and bulky nature of the data to be exchanged. For that reason we have discussed the possibility of a "publish/subscribe/request" solution. The publish/subscribe part would be to publish a message to the dependent systems saying that something is ready for pickup; the actual content is then picked up by an old-school Request-Reply action. How does this sound to you? Regards, Morten
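    For illustration, a toy in-memory sketch of the idea (every name here is hypothetical): the published event is just a small claim check, and each interested subscriber then fetches the bulky, encrypted document itself via plain request/reply:

        import json

        class Broker:
            """Stand-in for a real message broker's topic."""
            def __init__(self):
                self.subscribers = []

            def publish(self, message):
                for callback in self.subscribers:
                    callback(message)

        def fetch_document(href):
            # Stand-in for the request/reply pickup,
            # e.g. an authenticated HTTPS GET.
            return "<encrypted bytes of %s>" % href

        broker = Broker()
        broker.subscribers.append(
            lambda m: print("picked up:", fetch_document(json.loads(m)["href"]))
        )

        # Publisher side: announce availability only; the payload stays put,
        # so uninterested subscribers cost almost nothing.
        broker.publish(json.dumps({
            "type": "document.ready",
            "href": "https://example.org/documents/42",
        }))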

    Read the article

  • Save BIG on Storage - with Oracle Advanced Compression

    - by [email protected]
    Recently, we published a podcast revealing just how much Oracle benefits from its internal use of Oracle Database 11g and Advanced Compression. With hundreds of TB and millions of dollars saved, Oracle Advanced Compression is dramatically reducing storage costs and substantially improving efficiency across the company. Now here's your chance: meet the experts, have your questions answered by them, and immediately start using your storage more efficiently. On April 14th, join me for a live webcast with Oracle's Tim Shetler, Vice President of Product Management, and Bill Hodak, Principal Product Manager, to learn just how Oracle Advanced Compression can:

    - Reduce disk space requirements for all types of data
    - Improve query and storage performance
    - Lower storage costs throughout the datacenter

    Register here!

    Read the article

  • Why is implementing copy-paste in a touch screen based smartphone such a big deal?

    - by EpsilonVector
    I'm not entirely sure this is on-topic, but it definitely needs a programmer's understanding to be answered, and it deals with general development (for a specific scenario) rather than a specific piece of code. In a way it also translates into "what are the challenges in doing X in a touch screen app", and similar questions have been asked here in the past. So here it is: when Apple didn't implement copy-paste in the iPhone from version 1, I just assumed it was a UI issue: they were waiting until they figured out a good UI for it. But now the idea is out there, and Microsoft still released Windows Phone 7 without copy-paste, promising it'll be ready in a few months. My question is: why does this take a few months to implement? Are there some technological challenges unique to programming for a touch screen that I'm not familiar with?

    Read the article

  • Temporary background image while the big one is loading? [migrated]

    - by Mikhail
    Is there a way, without JavaScript, to load a small background image before the real one has downloaded? (Without JavaScript because I already know how to do it with JavaScript.) I can't test whether the following CSS3 works, because locally it loads too quickly:

        body { background-image: url('hugefile.jpg'), url('tinypreload.jpg'); }

    If tinypreload.jpg is only, say, 20k and hugefile.jpg is 300k, would this accomplish the task? I assume that both downloads would start at the same time instead of being consecutive.
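    As far as I know, yes, in principle: with CSS3 multiple backgrounds both images are requested as soon as the rule applies, and the first layer listed paints on top, so the tiny image underneath shows through until the big one arrives. A commented version of the same rule (file names as in the question):

        body {
            /* The first layer wins once it has loaded; the tiny
               placeholder underneath is visible until then. */
            background-image: url('hugefile.jpg'), url('tinypreload.jpg');
        }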

    Read the article

  • Grid Style. Is It The Next Big Thing In Web Design?

    Inspired by the typographic grids of magazine layouts, more and more web designers are starting to embrace grid-based design, citing cleaner and more easily digestible web pages as a benefit. The concept of ... [Author: Michiel Van Kets - Web Design and Development - June 17, 2010]

    Read the article

  • How important is a big-name school for fresh grads?

    - by Fishtoaster
    How important, if at all, is the school you went to for your job prospects coming out of college? That is, how much difference is there between how a graduate of MIT/Stanford/etc. is treated versus one from RIT, versus Monroe Community College, versus Joe's Discount Diploma Shop? I'm asking specifically about factors outside the education itself, and more about the perceived value of your degree to prospective employers. What do you think?

    Read the article

  • Which of these URL scenarios is best for big link menus? [seo / user-friendly urls]

    - by Sam
    Hi folks, a question about URLs. A good friend and I are exploring the possibilities of three scenarios for a website where each page has a menu system with about 130 links:

    SCENARIO 1 - the menu has SHORT, non-descriptive hyperlinks and a SHORT canonical URL:

        <a href="design">dutch design</a>

        the page's canonical URL points to e.g. "design"

    SCENARIO 2 - the menu has SHORT, non-descriptive hyperlinks with LONG canonical URLs:

        <a href="design">dutch design</a>

        the page's canonical URL points to "dutch-design-crazy-yes-but-always-honest"

    SCENARIO 3 - the menu has LONG, descriptive hyperlinks with LONG canonical URLs:

        <a href="dutch-design-crazy-yes-but-always-honest">dutch design</a>

        the page's canonical URL points to "dutch-design-crazy-yes-but-always-honest"

    Currently we have scenario 2; should we progress to scenario 3? All three work fine and point via mod_rewrite to the same page, which is fetched behind the scenes. Now, my question is which of these is better in terms of:

    - user-friendliness (page loading times, full URL visible in the URL bar or not)
    - SEO-friendliness (proper indexing because the URLs contain descriptive, relevant tags)
    - other concerns we forgot, like possible penalties for so many words in link hrefs?

    Thanks very much for your suggestions: much appreciated!

    Read the article

  • Rules of Holes #4 - Do You Have the BIG Picture?

    - by ArnieRowland
    Some folks decry the concept of being in a 'Hole'. For them, there is no such thing as 'Technical Debt', no such thing as maintaining weak and wobbly legacy code, no such thing as bad designs, no such thing as under-skilled or poorly performing co-workers, no such thing as 'fighting fires', and no such thing as management that doesn't share the corporate vision. They just go to work and do their job, keep their heads down, and do whatever is required. Mostly. Until the day they are swallowed by the...(read more)

    Read the article

  • What are your intentions with Java technology, Big Red?

    - by hinkmond
    Here's another article (this time from TechCentral) giving the roadmap of what's intended for Java technology moving forward toward Java SE 8, 9, 10 and beyond. See: Oracle outlines Java Intentions. Here's a quote: Under the subheading, "Works Everywhere and With Everything," Oracle lists goals like scaling down to embedded systems and up to massive servers, as well as support for heterogeneous compute models. If our group is going to get Java working "Everywhere and With Everything", we'd better get crackin'! We especially have to make more room in our lab if we need to fit "Everything" in there to test... "Everything" takes up a lot of room! Hinkmond

    Read the article
