Search Results

Search found 1062 results on 43 pages for 'jonathan m davis'.

Page 5/43 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allow for rapid-fire rounds of development and refactoring, while always retaining a “production-ready” state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, with much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and “still favored tools I’d have been embarrassed to use in the ’80s”. DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the “mother ship”. I sat blinking in astonishment at the speaker’s vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don’t conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who’ve attempted to apply a standard query that works on MySQL, or some other database, but doesn’t produce the expected results on SQL Server, are advised to shun the standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our hearts should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And they should sink further still when we read notes in the Microsoft documentation warning us not to rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
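
    The guard-clause scenario is easy to make concrete. The sketch below is purely illustrative and not taken from the editorial: a Java/JDBC check for a table's existence using the standards-based INFORMATION_SCHEMA views, with the SQL Server-specific catalog-view equivalent noted in a comment. The connection string, schema and table names are placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class TableGuard {

            // Standards-based guard clause: does a table exist in a given schema?
            static boolean tableExists(Connection conn, String schema, String table) throws SQLException {
                String sql = "SELECT 1 FROM INFORMATION_SCHEMA.TABLES "
                           + "WHERE TABLE_SCHEMA = ? AND TABLE_NAME = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, schema);
                    ps.setString(2, table);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next();
                    }
                }
            }

            public static void main(String[] args) throws SQLException {
                // Placeholder connection string for the Microsoft JDBC driver.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=Sales;integratedSecurity=true")) {
                    if (!tableExists(conn, "dbo", "Orders")) {
                        System.out.println("Table dbo.Orders is missing; run the deployment script first.");
                    }
                    // Vendor-specific equivalent, using the SQL Server catalog views:
                    //   SELECT 1 FROM sys.tables t JOIN sys.schemas s ON t.schema_id = s.schema_id
                    //   WHERE s.name = ? AND t.name = ?
                }
            }
        }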

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part IV

    - by Ted Davis
    Welcome to the final Oracle Linux Partner Pavilion Spotlight, Part IV. Two days left till the Big Show. You are gearing up. We are gearing up. You can feel the excitement. We can feel the excitement. This. Will. Be. The. Best. Show. EVER. See you at the Partner Pavilion (Moscone South, Booth #1033) at Oracle OpenWorld. - Oracle Linux / Oracle VM Team. HP: HP and Oracle are pleased to announce another Oracle Validated Configuration based on the ProLiant DL980 server. Many choose to deploy Oracle workloads on the ProLiant DL980 based on the cost/performance ratio they achieve running the Oracle Linux Unbreakable Enterprise Kernel. You can be confident that Oracle Validated Configurations based on ProLiant servers will help you achieve your most demanding performance goals. QLogic: The QLogic-Oracle partnership spans over 20 years, resulting in the most comprehensive line of Oracle Linux I/O adapter technology. Interface options include Ethernet, Fibre Channel, and FCoE. Host-side connectivity is offered in both low-profile PCIe and Express Module PCIe form factors. QLogic software drivers are jointly qualified and “in-box” with Oracle Linux 5.x, 6.x and Oracle VM, enabling simplified installation and management while simultaneously taking risk out of the solution. Bringing innovations such as NPIV, T10-PI, and intelligent caching adapter technology to the Oracle Linux environment further strengthens the QLogic advantage. A big thank you to all of our Oracle Linux Partner Pavilion participants. We, and they, look forward to meeting you next week at Oracle OpenWorld. If you've missed our three previous Partner Spotlights, here are the links: Part I, Part II, Part III.

    Read the article

  • You are or will be a laid-off programmer - what should you have done a year ago, and what do you do right now, tomorrow, and next week?

    - by Adam Davis
    Many programmers, software engineers, and other technology professionals are out of work, facing layoffs, or are unprepared for layoffs even though they feel secure right now. What should every programmer do right now (even if secure in their current job) to prepare for layoffs down the road? If your boss came to your cubicle while you read this and laid you off: What would you do immediately after? What would you do tomorrow? What would you do next week? It's obvious that one should always have an up-to-date resume, always get recommendations from people when they see you at your best (not when you're looking for a new job), etc. What are the things, step by step, that every programmer should do (or should consider doing) long before they are laid off, when they're laid off, and shortly after being laid off? This is a question with many possible facets. While I want to encourage discussion to center around programming-career-based answers, please reconsider before downvoting someone because they're thinking in terms of how they're going to avoid going into debt. Bonus catch-22 type question: You can study a new language or technology while out of work, but most places want you to have more than 1-2 months' experience in a working environment, not just from a learning exercise. Is it worthwhile to place a priority on new (ideally in-demand) skills, or should you instead hone existing skills?

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • Data Model Dissonance

    - by Tony Davis
    So often at the start of the development of database applications, there is a premature rush to the keyboard. Unless, before we get there, we’ve mapped out and agreed the three data models, the Conceptual, the Logical and the Physical, then the inevitable refactoring will dog development work. It pays to get the data models sorted out up-front, however ‘agile’ you profess to be. The hardest model to get right, the most misunderstood, and the one most neglected by the various modeling tools, is the conceptual data model, and yet it is critical to all that follows. The conceptual model distils what the business understands about itself, and the way it operates. It represents the business rules that govern the required data, its constraints and its properties. The conceptual model uses the terminology of the business and defines the most important entities and their inter-relationships. Don’t assume that the organization’s understanding of these business rules is consistent or accurate. Too often, one department has a subtly different understanding from another of what an entity means and what it stores. If our conceptual data model fails to resolve such inconsistencies, it will reduce data quality. If we don’t collect and measure the raw data in a consistent way across the whole business, how can we hope to perform meaningful aggregation? The conceptual data model has more to do with business than technology, and as such, developers often regard it as a worthy but rather arcane ceremony, like saluting the flag or only eating fish on Friday. However, the consequences of getting it wrong have a direct and painful impact on many aspects of the project. If you adopt a silo-based (a.k.a. domain-driven) approach to development, you are still likely to suffer by starting with an incomplete knowledge of the domain. Even when you have surmounted these problems so that the data entities accurately reflect the business domain that the application represents, there are likely to be dire consequences from abandoning the goal of a shared, enterprise-wide understanding of the business. In reading this, you may recall experiences of the consequence of getting the conceptual data model wrong. I believe that Phil Factor, for example, witnessed the abandonment of a multi-million dollar banking project due to an inadequate conceptual analysis of how the bank defined a ‘customer’. We’d love to hear of any examples you know of development projects poleaxed by errors in the conceptual data model. Cheers, Tony
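
    The 'customer' example can be made concrete with a deliberately tiny, hypothetical sketch (the class names and fields below are invented for illustration and are not from the editorial; it uses Java 16+ records): two departments model the same business term with incompatible shapes, and any attempt to aggregate across them inherits the dissonance.

        // Hypothetical illustration of conceptual-model dissonance: two departments'
        // incompatible notions of "customer". Neither is wrong in isolation; aggregating
        // across them without a shared conceptual model is what hurts.

        // Sales: a customer is an individual person who can be contacted directly.
        record SalesCustomer(String fullName, String email) {}

        // Billing: a customer is an account, possibly covering many people.
        record BillingCustomer(String accountNumber, long creditLimitPence) {}

        public class Dissonance {
            public static void main(String[] args) {
                SalesCustomer s = new SalesCustomer("Jane Doe", "jane@example.com");
                BillingCustomer b = new BillingCustomer("ACC-0042", 500_000L);
                // "How many customers do we have?" now has two defensible answers,
                // which is exactly the inconsistency a conceptual model must resolve.
                System.out.println(s + " vs " + b);
            }
        }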

    Read the article

  • Why does 12.04 try but fail to hibernate, even after I enabled hibernation?

    - by Roger Davis
    Regarding the info from another post below (-----): my system will try (as described there) to hibernate, but it won't get all the way there. Hard drive activity stops, but it does not shut down. If I turn the power off, then back on, it will start, but I have to "restore previous session" in the browser, and other open apps don't restart, with the accompanying hassle. So because it tries, will the suggested fix then cause it to work correctly, or is the literal "try" not really what he means?!? PLEASE NOTE - this is a desktop system, not a laptop. Before enabling hibernation, please try to test whether it works correctly by running pm-hibernate in a terminal. The system will try to hibernate. If you are able to start the system again then you are more or less safe to add an override. To do so, start editing: sudo nano /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla Fill it with this: [Re-enable hibernate by default] Identity=unix-user:* Action=org.freedesktop.upower.hibernate ResultActive=yes. Save by pressing Ctrl-O and exit nano by pressing Ctrl-X. Restart and hibernation is back!

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.
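
    The array example translates directly into Java, which has no dependent types: the type int[] says nothing about length, so bounds violations surface only at run time, and the "length travels with the value" idea has to be simulated with explicit checks. A minimal, purely illustrative sketch:

        import java.util.Optional;

        public class Bounds {

            // The type int[] says nothing about length, so this compiles happily
            // even though it fails for any array shorter than five elements.
            static int unsafeFourth(int[] values) {
                return values[4]; // may throw ArrayIndexOutOfBoundsException at run time
            }

            // Without dependent types we fall back on run-time checks: the length
            // is consulted before every access and absence is made explicit.
            static Optional<Integer> checkedGet(int[] values, int index) {
                return (index >= 0 && index < values.length)
                        ? Optional.of(values[index])
                        : Optional.empty();
            }

            public static void main(String[] args) {
                int[] two = {10, 20};
                System.out.println(checkedGet(two, 4)); // Optional.empty, no exception
                try {
                    unsafeFourth(two);
                } catch (ArrayIndexOutOfBoundsException e) {
                    System.out.println("Caught at run time: " + e.getMessage());
                }
            }
        }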

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part II

    - by Ted Davis
    As we draw closer to the first day of Oracle OpenWorld, starting in less than a week, we continue to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033). We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. In today's post we highlight three additional Oracle Linux / Oracle VM Partners from the pavilion. Micro Focus delivers mainframe solutions software and software delivery tools with its Borland products. These tools are grouped under the following solutions: Analysis and testing tools for JDeveloper: Micro Focus Enterprise Analyzer is key to the success of application overhaul and modernization strategies by ensuring that they are based on a solid knowledge foundation. It reveals the reality of enterprise application portfolios and the detailed constructs of business applications. COBOL for Oracle Database, Oracle Linux, and Tuxedo: Micro Focus Visual COBOL delivers the next generation of COBOL development and deployment. It brings the productivity of the Eclipse IDE to COBOL, and provides the ability to deploy key business-critical COBOL applications to Oracle Linux both natively and under a JVM. Migration and Modernization tooling for mainframes: Enterprise application knowledge, development, test and workload re-hosting tools significantly improve the efficiency of business application delivery, enabling CIOs and IT leaders to modernize application portfolios and target platforms such as Oracle Linux. LSI: When it comes to Oracle Linux database environments, supporting high transaction rates with minimal response times is no longer just a goal. It’s a strategic imperative. The “data deluge” is impacting the ability of databases and other strategic applications to access data and provide real-time analytics and reporting. As such, customer demand for accelerated application performance is increasing. Visit LSI at the Oracle Linux Pavilion, #733, to find out how LSI Nytro Application Acceleration products are designed from the ground up for database acceleration. Our intelligent solid-state storage solutions help to eliminate I/O bottlenecks, increase throughput and enable Oracle customers to achieve the highest levels of DB performance. Accelerate Your Exadata Success With Teleran. Teleran’s software solutions for Oracle Exadata and Oracle Database reduce the cost, time and effort of migrating and consolidating applications on Exadata. In addition, Teleran delivers visibility and control solutions for BI/data warehouse performance and user management that ensure service levels and cost efficiency. Teleran will demonstrate these solutions at the Oracle OpenWorld Linux Pavilion: Consolidation Accelerator - Reduces the cost, time and risk of migrating and consolidating applications on Exadata. Application Readiness – Identifies legacy application performance enhancements needed to take advantage of Exadata performance features. Workload Accelerator – Identifies and clusters workloads for faster performance on Exadata. Application Visibility and Control - Improves performance, user productivity, and alignment to business objectives while reducing support and resource costs. Thanks for reading today's Partner Spotlight. Three more partners will be highlighted tomorrow. If you missed our first Partner Spotlight, check it out here.

    Read the article

  • Consuming too much power on a Dell 15R

    - by Daniel Davis
    I'm new to Ubuntu, and I've just installed Ubuntu 11.10, and I must say I really like it a lot. I'm getting used to Ubuntu faster than I thought, but the only issue I'm having is the power consumption. I have a Dell 15R with BIOS A07, and I uninstalled Ubuntu 11.04 because of some glitches I hated. None of those appear in this version of Ubuntu, but now my computer discharges way too quickly. When I check the power icon, it states that I have about 3:45 remaining, which is roughly the same as what Windows 7 and Ubuntu 11.04 used to tell me, but 15 minutes later I check again and it gives me only 1:15 remaining. Also, my computer seems to get a lot hotter than it used to. Are there more options to control the power usage on my laptop?

    Read the article

  • Instantiating objects in Java

    - by Davis Naglis
    I'm learning Java from scratch now, and when I started to learn about instantiating objects, I didn't understand: in which cases do I need to instantiate objects? For example, I'm studying from a TutsPlus course, and there is an example about a "Rectangle" class. The instructor says that it needs to be instantiated. So I started to wonder: when do I need to instantiate objects when writing Java code?
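
    For illustration only (the course's actual Rectangle class isn't reproduced here), a minimal sketch of the distinction: a class is just a blueprint, and you instantiate it with new whenever you need a concrete object holding its own state; a static member, by contrast, can be called without any instance at all.

        public class Rectangle {
            private final double width;
            private final double height;

            public Rectangle(double width, double height) {
                this.width = width;
                this.height = height;
            }

            public double area() {
                return width * height;
            }

            // A static factory method: callable without an existing instance,
            // though it still creates one for you.
            public static Rectangle square(double side) {
                return new Rectangle(side, side);
            }

            public static void main(String[] args) {
                // Two separate instances, each with its own state.
                Rectangle door = new Rectangle(0.9, 2.0);
                Rectangle window = Rectangle.square(1.5);
                System.out.println(door.area());   // 1.8
                System.out.println(window.area()); // 2.25
            }
        }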

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied. Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program. At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows combinatorially with the number of threads. If you have two threads, each executing n instructions, then the instructions can be interleaved in (2n)!/(n!)^2 different ways, a number that grows roughly as 4^n. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads. So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
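
    The shared-memory hazard is easy to demonstrate in Java (a toy sketch, not taken from the editorial): two threads incrementing a plain int lose updates, because the read-modify-write is not atomic, while the same work routed through an AtomicInteger is deterministic. An actor-style alternative would go further and give each thread private state, merging results only via messages at the end.

        import java.util.concurrent.atomic.AtomicInteger;

        public class RaceDemo {
            static int plainCounter = 0;                                    // shared, unprotected
            static final AtomicInteger atomicCounter = new AtomicInteger(); // shared, atomic

            public static void main(String[] args) throws InterruptedException {
                Runnable work = () -> {
                    for (int i = 0; i < 100_000; i++) {
                        plainCounter++;                  // read-modify-write: updates can be lost
                        atomicCounter.incrementAndGet(); // atomic: never loses an update
                    }
                };
                Thread t1 = new Thread(work);
                Thread t2 = new Thread(work);
                t1.start(); t2.start();
                t1.join(); t2.join();

                // Typically prints something less than 200000 for the plain counter,
                // and exactly 200000 for the atomic one.
                System.out.println("plain  = " + plainCounter);
                System.out.println("atomic = " + atomicCounter.get());
            }
        }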

    Read the article

  • Oracle Linux Partner Pavilion Spotlight III

    - by Ted Davis
    Three days until Oracle OpenWorld 2012 begins. The anticipation and excitement are building. In today's spotlight we are presenting an additional three partners exhibiting in the Oracle Linux Partner Pavilion at Oracle OpenWorld (Booth #1033). Fujitsu will showcase a Gold tower system representing the one-millionth PRIMERGY server shipped, highlighting Fujitsu’s position as the #4 server vendor worldwide. Fujitsu’s broad range of server platforms is reshaping the data center with virtualization and cloud services, including those based on Oracle Linux and Oracle VM. BeyondTrust, the leader in providing context-aware security intelligence, will be showcasing its threat management and policy enablement solutions for addressing IT security risks and simplifying compliance. BeyondTrust will discuss how to reduce security risks, close security gaps and improve visibility across your server and database infrastructure. Please stop by to see live demonstrations of BeyondTrust’s award-winning vulnerability management and privileged identity management solutions supported on Oracle Linux. Virtualized infrastructure with Oracle VM and NetApp storage and data management solutions provides an integrated and seamless end-user experience, designed for maximum efficiency to allow for native NetApp deduplication and backup/recovery/cloning of VMs or templates. Whether you are provisioning one or multiple server pools or dynamically re-provisioning storage for your virtual machines to meet business demands, with Oracle and NetApp you have one single point-and-click console to rapidly and easily deploy a virtualized agile data infrastructure in minutes. So there you have it! The third installment of our Partner Spotlight. Check out Part I and Part II of our Partner Spotlights from previous days if you've missed them. Remember to visit the Oracle Linux team at Oracle OpenWorld.

    Read the article

  • jQuery is not working over local network [migrated]

    - by Kortyell Davis
    I have a Fedora server running the Apache web server. The server is connected to a home network, and I have a laptop connected to the same network. I can enter the IP address of my server into the browser on my laptop and pull up the index.html file located in the document root directory of the Fedora home server. The index.html file contains jQuery code. The jQuery code only works when I open the file locally in my browser (e.g. right click, open with Firefox), but when I attempt to view the webpage from my laptop the jQuery code is not executed. The code is here below. <script type="text/javascript" src="jquery-1.8.2.js"></script> $(document).ready(function() { $('#form').hide(); $('input[type=text]').focus(function() { $(this).val(''); }); $('input[type=password]').focus(function() { $(this).val(''); }); $('.form').hide(); $('#log').click(function(){ $('#form').toggle(); }); $('#reg').click(function(){ $('.form').toggle(); }); });

    Read the article

  • how to convert bitmap image to CIELab image?

    - by Neal Davis
    Hello to stackoverflow members! This is my first post but I love reading the questions and answers on this forum! I'm using VS2008, C#, .NET 3.5 and Windows XP Pro with the latest patches on this project. I am reading a jpeg file into a bitmap image using C# in the following way: Bitmap bmp = new Bitmap("background.jpg"); Graphics g = Graphics.FromImage(bmp); I then write some text and logos on g using various DrawString and DrawImage commands. I then want to write this bitmap to a tiff file using the command: bmp.Save("final_artwork.tif", ImageFormat.Tiff); I don't know what the TIFF tags in the header of the final resultant TIFF file will be when I do bmp.Save as shown above. But what I want to produce is a TIFF image file with PHOTOMETRIC CIELab, BITSPERSAMPLE 8, SAMPLESPERPIXEL 3, FILLORDER MSB2LSB, ORIENTATION TOPLEFT, PLANARCONFIG CONTIG, and also uncompressed and singlestrip. I'm fairly certain (at least, I have not found anything about the Microsoft API and CIELab) that I cannot convert the bitmap image to CIELab photometric, or meet the other requirements, using anything from the Microsoft API or the .NET 3.5 environment; please correct me if I am wrong. I have used libtiff in a project in the past, but just to write a tiff file where the tiff image in memory was already in CIELab format and met all the other requirements as noted above. I'm not certain libtiff can convert from my C# bitmap image, which is probably RGB photometric, to CIELab, 8 bps, and 3 spp. Maybe I can use OpenCVImage or Emgu CV or something similar? So the main question is: what is going to be the easiest way to get my final bitmap image as outlined above into a CIELab TIFF while also meeting the other requirements above? Thanks for any suggestions and code fragments you can provide! Neal Davis
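
    The colour-space conversion itself is standard mathematics rather than anything tied to a particular TIFF library, so it can be done per pixel before handing the buffer to libtiff (which would still be responsible for the CIELab photometric tagging). A hedged sketch follows, written in Java rather than the questioner's C#, assuming 8-bit sRGB input and the D65 white point; the constants are the usual published ones.

        public class SrgbToLab {

            // Convert one 8-bit sRGB pixel to CIE L*a*b* (D65 white point):
            // linearize sRGB, apply the sRGB-to-XYZ matrix, then the XYZ-to-Lab formulae.
            static double[] toLab(int r8, int g8, int b8) {
                double r = linearize(r8 / 255.0);
                double g = linearize(g8 / 255.0);
                double b = linearize(b8 / 255.0);

                // Linear sRGB -> CIE XYZ (D65)
                double x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
                double y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
                double z = 0.0193 * r + 0.1192 * g + 0.9505 * b;

                // Scale by the D65 reference white, then apply the Lab transfer function.
                double fx = f(x / 0.95047);
                double fy = f(y / 1.00000);
                double fz = f(z / 1.08883);

                double L = 116.0 * fy - 16.0;
                double a = 500.0 * (fx - fy);
                double bb = 200.0 * (fy - fz);
                return new double[] { L, a, bb };
            }

            private static double linearize(double c) {
                return (c <= 0.04045) ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
            }

            private static double f(double t) {
                final double epsilon = 216.0 / 24389.0; // (6/29)^3
                final double kappa = 24389.0 / 27.0;
                return (t > epsilon) ? Math.cbrt(t) : (kappa * t + 16.0) / 116.0;
            }

            public static void main(String[] args) {
                double[] lab = toLab(255, 0, 0); // pure red: roughly L*=53, a*=80, b*=67
                System.out.printf("L*=%.1f a*=%.1f b*=%.1f%n", lab[0], lab[1], lab[2]);
            }
        }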

    Read the article

  • SORT empties my file?

    - by Jonathan Sampson
    I'm attempting to sort a csv on my machine, but I seem to be erasing the contents each time I use the sort command. I've basically created a copy of my csv lacking the first row: sed '1d' original.csv > newcopy.csv To confirm that my new copy exists lacking the first row I can check with head: head 1 newcopy.csv Sure enough, it finds my file and shows me the original second row (now the first row). My csv consists of numerous values separated by commas: Jonathan Sampson,,,,[email protected],,,GA,United States,, Jane Doe,Mrs,,,[email protected],,,FL,United States,32501, As indicated above, some fields are empty. I want to sort based upon the email address field, which is either 4 or 5, depending on whether the sort command uses a zero-based index. So I'm trying the following: sort -t, +4 -5 newcopy.csv > newcopy.csv So I'm using -t, to indicate that my fields are terminated by the comma, rather than a space. I'm not sure if +4 -5 actually sorts on the email field or not - I could use some help here. And then newcopy.csv > newcopy.csv to overwrite the original file with the new sort results. After I do this, if I try to read in the first line: head 1 newcopy.csv I get the following error: head: cannot open `1' for reading: No such file or directory == newcopy.csv <== Sure enough, if I check my directory the file is now empty, at 0 bytes.
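
    This isn't an answer to the shell-specific part of the question, but the same operation is easy to sketch in Java (purely illustrative; it assumes a simple comma-separated file with no quoted fields and that the email address is the fifth column, i.e. zero-based index 4). Note that it writes the sorted rows to a different file instead of redirecting output onto the input, which is what empties the file in the shell version.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Comparator;
        import java.util.List;
        import java.util.stream.Collectors;

        public class SortByEmail {
            public static void main(String[] args) throws IOException {
                Path in = Path.of("newcopy.csv");
                Path out = Path.of("sorted.csv"); // never the same file as the input

                List<String> rows = Files.readAllLines(in);
                List<String> sorted = rows.stream()
                        // naive split: assumes no quoted fields containing commas
                        .sorted(Comparator.comparing((String line) -> line.split(",", -1)[4]))
                        .collect(Collectors.toList());

                Files.write(out, sorted);
            }
        }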

    Read the article

  • Upgrading openSUSE 11.1 with Plesk Panel 9.3 to PHP 5.3

    - by Jonathan Hogervorst
    Hello, I'm running a VPS with openSUSE 11.1 (i586). Parallels Plesk Panel 9.3.0 is installed on the VPS. The current PHP version is PHP 5.2.11. I want to upgrade PHP to PHP 5.3, but I can't find good instructions on how to do this. If I check for updates in Zypper, it says this is the latest release. There isn't an update in the Plesk Updates either, via either the web-based interface or the command line interface. On Software.openSUSE.org I can find packages for PHP 5.3.1 in both the server:php/server_apache_openSUSE_11.1-repo and the server:php/openSUSE_11.1-repo (can't post the link because I'm a newbie here). But if I add one of those to Zypper, I still don't see an update. Is there somebody here who knows how to do this? And is it completely safe to update that way? I don't want to end up with a broken VPS... Thanks! Jonathan

    Read the article

  • download management

    - by Jonathan
    I download many files, usually 2 or 3 a day, often 10ish. Some of them are duplicates because I just can't be bothered to find the original in my downloads folder. I have previously tried DAP and used that to create a new subfolder for each day's downloads, yet I have found this insufficient, as sometimes I wish to find files by name or file type, or I have multiple parts of downloads spread over more than one day. Another problem I have found is with zips/rars/etc.: after downloading and extracting them I then have both the zip and the folder. I like how on a Mac it automatically extracts the zip after it has been downloaded and removes the zip. What I'd like to be able to do is sort the downloads by date, but dynamically, so they just sit in the big downloads folder but I can press a button and it will show me all the files from a particular site, from a particular day, or of a certain file type. Is there any software that will do this? I use Chrome as a browser but also have Firefox and like that too. Jonathan

    Read the article

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box. The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin only accessible from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users with myself and admin as members of the ssh-users group. What I want is something that works like you may expect this setup to work (but it doesn't): $ groups jonathan ... ssh-users $ groups admin ... ssh-restricted-users $ cat /etc/ssh/sshd_config ... AllowGroups ssh-users [email protected].* ... Is there a way to do this? I have also tried this, but it did not work (admin could still log in remotely): AllowUsers [email protected].* * AllowGroups ssh-users with admin a member of ssh-users. I would also be fine with only allowing admin to log in with a key, and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but not for general users.

    Read the article

  • Problems with createImage(int width, int height)

    - by Jonathan
    I have the following code, which is run every 10ms as part of a game: private void gameRender() { if(dbImage == null) { //createImage() returns null if GraphicsEnvironment.isHeadless() //returns true. (java.awt.GraphicsEnvironment) dbImage = createImage(PWIDTH, PHEIGHT); if(dbImage == null) { System.out.println("dbImage is null"); //Error recieved return; } else dbg = dbImage.getGraphics(); } //clear the background dbg.setColor(Color.white); dbg.fillRect(0, 0, PWIDTH, PHEIGHT); //draw game elements... if(gameOver) { gameOverMessage(dbg); } } The problem is that it enters the if statement which checks for the Image being null, even after I attempt to define the image. I looked around, and it seems that createImage() will return null if GraphicsEnvironment.isHeadless() returns true. I don't understand exactly what the isHeadless() method's purpose is, but I thought it might have something to do with the compiler or IDE, so I tried on two, both of which get the same error (Eclipse, and BlueJ). Anyone have any idea what the source of the error is, and how I might fix it? Thanks in advance Jonathan ................................................................... EDIT: I am using java.awt.Component.createImage(int width, int height). The purpose of this method is to ensure the creation of, and edit an Image that will contain the view of the player of the game, that will later be drawn to the screen by means of a JPanel. Here is some more code if this helps at all: public class Sim2D extends JPanel implements Runnable { private static final int PWIDTH = 500; private static final int PHEIGHT = 400; private volatile boolean running = true; private volatile boolean gameOver = false; private Thread animator; //gameRender() private Graphics dbg; private Image dbImage = null; public Sim2D() { setBackground(Color.white); setPreferredSize(new Dimension(PWIDTH, PHEIGHT)); setFocusable(true); requestFocus(); //Sim2D now recieves key events readyForTermination(); addMouseListener( new MouseAdapter() { public void mousePressed(MouseEvent e) { testPress(e.getX(), e.getY()); } }); } //end of constructor private void testPress(int x, int y) { if(!gameOver) { gameOver = true; //end game at mousepress } } //end of testPress() private void readyForTermination() { addKeyListener( new KeyAdapter() { public void keyPressed(KeyEvent e) { int keyCode = e.getKeyCode(); if((keyCode == KeyEvent.VK_ESCAPE) || (keyCode == KeyEvent.VK_Q) || (keyCode == KeyEvent.VK_END) || ((keyCode == KeyEvent.VK_C) && e.isControlDown()) ) { running = false; //end process on above list of keypresses } } }); } //end of readyForTermination() public void addNotify() { super.addNotify(); //creates the peer startGame(); //start the thread } //end of addNotify() public void startGame() { if(animator == null || !running) { animator = new Thread(this); animator.start(); } } //end of startGame() //run method for world public void run() { while(running) { long beforeTime, timeDiff, sleepTime; beforeTime = System.nanoTime(); gameUpdate(); //updates objects in game (step event in game) gameRender(); //renders image paintScreen(); //paints rendered image to screen timeDiff = (System.nanoTime() - beforeTime) / 1000000; sleepTime = 10 - timeDiff; if(sleepTime <= 0) //if took longer than 10ms { sleepTime = 5; //sleep a bit anyways } try{ Thread.sleep(sleepTime); //sleep by allotted time (attempts to keep this loop to about 10ms) } catch(InterruptedException ex){} beforeTime = System.nanoTime(); } System.exit(0); } //end of run() private void 
gameRender() { if(dbImage == null) { dbImage = createImage(PWIDTH, PHEIGHT); if(dbImage == null) { System.out.println("dbImage is null"); return; } else dbg = dbImage.getGraphics(); } //clear the background dbg.setColor(Color.white); dbg.fillRect(0, 0, PWIDTH, PHEIGHT); //draw game elements... if(gameOver) { gameOverMessage(dbg); } } //end of gameRender() } //end of class Sim2D Hope this helps clear things up a bit, Jonathan
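
    One commonly suggested direction, offered here as a hedged sketch rather than a confirmed diagnosis of this particular program, is to sidestep Component.createImage() for the off-screen buffer: it returns null while the component is not yet displayable (and, as the post notes, in a headless environment), whereas a BufferedImage can be created unconditionally and drawn to the JPanel in exactly the same way.

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.GraphicsEnvironment;
        import java.awt.image.BufferedImage;

        public class BackBuffer {
            private static final int PWIDTH = 500;
            private static final int PHEIGHT = 400;

            private BufferedImage dbImage;
            private Graphics dbg;

            void gameRender() {
                if (dbImage == null) {
                    // Unlike Component.createImage(), this never returns null,
                    // even before the component is shown on screen.
                    dbImage = new BufferedImage(PWIDTH, PHEIGHT, BufferedImage.TYPE_INT_RGB);
                    dbg = dbImage.getGraphics();
                }
                dbg.setColor(Color.WHITE);
                dbg.fillRect(0, 0, PWIDTH, PHEIGHT);
                // ... draw game elements onto dbg here ...
            }

            public static void main(String[] args) {
                // If this prints true, no on-screen rendering is possible at all.
                System.out.println("Headless? " + GraphicsEnvironment.isHeadless());
                BackBuffer b = new BackBuffer();
                b.gameRender();
                System.out.println("Buffer created: " + (b.dbImage != null));
            }
        }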

    Read the article

  • It’s time that you ought to know what you don’t know

    - by fatherjack
    There is a famous quote about unknown unknowns and known knowns and so on but I’ll let you review that if you are interested. What I am worried about is that there are things going on in your environment that you ought to know about, indeed you have asked to be told about, but you are not getting the information. When you schedule a SQL Agent job you can set it to send an email to an inbox monitored by someone who needs to know and indeed can do something about it. However, what happens if the email process isn't successful? Check your servers with this: USE [msdb] GO /* This code selects the top 10 most recent SQLAgent jobs that failed to complete successfully and where the email notification failed too. Jonathan Allen Jul 2012 */ DECLARE @Date DATETIME SELECT @Date = DATEADD(d, DATEDIFF(d, '19000101', GETDATE()) - 1, '19000101') SELECT TOP 10 [s].[name] , [sjh].[step_name] , [sjh].[sql_message_id] , [sjh].[sql_severity] , [sjh].[message] , [sjh].[run_date] , [sjh].[run_time] , [sjh].[run_duration] , [sjh].[operator_id_emailed] , [sjh].[operator_id_netsent] , [sjh].[operator_id_paged] , [sjh].[retries_attempted] FROM [dbo].[sysjobhistory] AS sjh INNER JOIN [dbo].[sysjobs] AS s ON [sjh].[job_id] = [s].[job_id] WHERE EXISTS ( SELECT * FROM [dbo].[sysjobs] AS s INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id] WHERE [sjh].[job_id] = [s2].[job_id] AND [s2].[message] LIKE '%failed to notify%' AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) >= @date AND [s2].[run_status] = 0 ) AND sjh.[run_status] = 0 AND sjh.[step_id] != 0 AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [run_date])) >= @date ORDER BY [sjh].[run_date] DESC , [sjh].[run_time] DESC go USE [msdb] go /* This code summarises details of SQLAgent jobs that failed to complete successfully and where the email notification failed too. Jonathan Allen Jul 2012 */ DECLARE @Date DATETIME SELECT @Date = DATEADD(d, DATEDIFF(d, '19000101', GETDATE()) - 1, '19000101') SELECT [s].name , [s2].[step_id] , CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) AS [rundate] , COUNT(*) AS [execution count] FROM [dbo].[sysjobs] AS s INNER JOIN [dbo].[sysjobhistory] AS s2 ON [s].[job_id] = [s2].[job_id] WHERE [s2].[message] LIKE '%failed to notify%' AND CONVERT(DATETIME, CONVERT(VARCHAR(15), [s2].[run_date])) >= @date AND [s2].[run_status] = 0 GROUP BY name , [s2].[step_id] , [s2].[run_date] ORDER BY [s2].[run_date] DESC These two result sets will show if there are any SQL Agent jobs that have run on your servers that failed and also failed to send an email about the failure. I hope it’s of use to you. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Silverlight Cream for January 14, 2011 -- #1027

    - by Dave Campbell
    In this Issue: Sigurd Snørteland, Yochay Kiriaty, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), Kunal Chowdhury, Martin Krüger(-2-), Jonathan Cardy. Above the Fold: Silverlight: "Image Viewer using a GridSplitter control" Martin Krüger WP7: "Implementing WP7 ToggleImageControl from the ground up: Part1" WindowsPhoneGeek VS2010 Templates: "MVVM Project Templates for Visual Studio 2010" Jonathan Cardy From SilverlightCream.com: BabySmash7 - a WP7 children's game (source code included) Sigurd Snørteland not only brings Scott Hanselman's Baby Smash to WP7, but he's also delivering the source to us, as well as a discussion of the app. Windows Push Notification Server Side Helper Library Yochay Kiriaty has a tutorial up on Push Notification... not explaining them, but a discussion of a WP7 Push Recipe that provides an easy way of sending all 3 kinds of push notification messages currently supported. Implementing WP7 ToggleImageControl from the ground up: Part1 WindowsPhoneGeek has a great 2-part series up on building a useful WP7 custom control -- a ToggleImage control... this part 1 is definition, deciding on Visual states, etc... buckle up... this is good stuff. Implementing WP7 ToggleImageControl from the ground up: Part2 Part 2 in WindowsPhoneGeek's series is also up, and is where the real fun lives -- implementing the behavior of the control... and the source is available at the end of this post. The Full Stack #5 – Entity Framework Code First Jesse Liberty has episode 5 of the "Full Stack" series he and Jon Galloway are doing, discussing Entity Framework Code First. Windows Phone From Scratch #18 – MVVM Light Toolkit Soup To Nuts 3 Jesse Liberty also has part 3 of his MVVM Light and WP7 series up and is digging into messaging in this one... for example, View <--> ViewModel communication. Exploring Ribbon Control for Silverlight (Part - 1) Kunal Chowdhury has part 1 of a series up on using the Silverlight Ribbon Control from DevComponents... lots of information and a great intro to a great control. Image Viewer using a GridSplitter control Martin Krüger has a very nice picture viewer up as a demo, with code available, that simply uses the GridSplitter to implement the aperture... check it out. How to: Gentle animation of a magnify effect Martin Krüger's latest is a take-off on a prior post he links to called 'just for fun', in which he smoothly animates a magnify effect... just very cool animation... explanation and source. MVVM Project Templates for Visual Studio 2010 Jonathan Cardy has a couple of resources you probably wanna grab... two MVVM project templates for VS2010, one WPF and one Silverlight. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Silverlight Cream for March 05, 2011 -- #1053

    - by Dave Campbell
    In this all-submittal (while I was at MVP11) Issue: Michael Washington(-2-), goldytech, JFo, Andrea Boschin, Jonathan Marbutt, Gregor Biswanger, Michael Wolf, and Peter Kuhn. Above the Fold: Silverlight: "A Simple Bindable CheckboxList Control" Jonathan Marbutt WP7: "Struggles with the Panorama Control" JFo LightSwitch: "HTML (including HTML 5) and LightSwitch at the same time?" Michael Washington From SilverlightCream.com: LightSwitch vs HTML 5? In his first post-MVP11 post, Michael Washington takes on HTML5 with a LightSwitch discussion. Good discussion follows in the comments also. HTML (including HTML 5) and LightSwitch at the same time? Michael Washington's 2nd post is a great tutorial on creating a re-usable business layer with LightSwitch... all good stuff, and look for more from Michael as LightSwitch matures. How to add Computed Properties in WCF Ria Services on client goldytech has a new post up about providing real-time solutions to client-side calculations with WCF RIA Services. Struggles with the Panorama Control JFo details a problem she had with the Panorama control on WP7... detailing 4 problems she had and her solutions... well thought-out explanations too... a definite good read... and another blogger to add to my list! Windows Phone 7 - Part #7: Understanding Push Notifications Andrea Boschin has part 7 of his WP7 series up at SilverlightShow, concentrating on Push Notifications this time out... a great explanation of push notifications in this tutorial, from the service and phone side, with a working sample to boot. A Simple Bindable CheckboxList Control Jonathan Marbutt took a completely different direction than most and created his own Bindable CheckboxList by starting with ContentControl rather than a Listbox... pretty cool, and all the source. Own routed events in Silverlight I met Gregor Biswanger at the MVP Summit and asked him to send me his blog run through Microsoft Translator... here's a great post on routed events he did back in November... and a discussion of his CallMethodAction Behavior... which looks like another good post subject! Creating a Silverlight Out-of-Browser Splash Screen Michael Wolf has a post up discussing OOB splash screens... I like his "White screen of Awesome" definition... I'm very familiar with that :) ... check out his solution for getting around that white screen, and lots of external links too. XNA for Silverlight developers: Part 5 - Input (touch + gestures) Peter Kuhn has Part 5 in his tutorial series on XNA for Silverlight devs up at SilverlightShow... this time covering touch and gestures... how to enable and read gestures, and the difference between Silverlight and XNA in the touch department. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article
