Search Results

Search found 8255 results on 331 pages for 'general guts'.

Page 51/331 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • How to prevent delays associated with IPv6 AAAA records?

    - by Nic
    Our Windows servers are registering IPv6 AAAA records with our Windows DNS servers. However, we don't have IPv6 routing enabled on our network, so this frequently causes stall behaviours. Microsoft RDP is the worst offender. When connecting to a server that has an AAAA record in DNS, the remote desktop client will try IPv6 first, and won't fall back to IPv4 until the connection times out. Power users can work around this by connecting to the IP address directly. Resolving the IPv4 address with ping -4 hostname.foo always works instantly. What can I do to avoid this delay?
    Disable IPv6 on the client? Nope: Microsoft says IPv6 is a mandatory part of the Windows operating system, there are too many clients to ensure the setting is applied everywhere consistently, and it will cause more problems later when we finally implement IPv6.
    Disable IPv6 on the server? Nope, for the same reasons, and it requires an inconvenient registry hack to disable the entire IPv6 stack; ensuring this is correctly set on all servers is inconvenient.
    Mask IPv6 records on the user-facing DNS recursor? Nope: we're using NLnet Unbound and it doesn't support that.
    Prevent registration of IPv6 AAAA records on the Microsoft DNS server? I don't think that's even possible.
    At this point, I'm considering writing a script that purges all AAAA records from our DNS zones. Please, help me find a better way.
    UPDATE: DNS resolution is not the problem. As @joeqwerty points out in his answer, the DNS records are returned instantly; both A and AAAA records are immediately available. The problem is that some clients (mstsc.exe) will preferentially attempt a connection over IPv6, and take a while to fall back to IPv4. This seems like a routing problem. The ping command produces a "General failure" error message because the destination address is unroutable:
        C:\Windows\system32>ping myhost.mydomain
        Pinging myhost.mydomain [2002:1234:1234::1234:1234] with 32 bytes of data:
        General failure.
        General failure.
        General failure.
        General failure.
        Ping statistics for 2002:1234:1234::1234:1234:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
    I can't get a packet capture of this behaviour. Running this (failing) ping command does not produce any packets in Microsoft Network Monitor. Similarly, attempting a connection with mstsc.exe to a host with an AAAA record produces no traffic until it falls back to IPv4.
    UPDATE: Our hosts are all using publicly-routable IPv4 addresses. I think this problem might come down to a broken 6to4 configuration. 6to4 behaves differently on hosts with public IP addresses vs RFC 1918 addresses.
    UPDATE: There is definitely something fishy with 6to4 on my network. When I disable 6to4 on the Windows client, connections resolve instantly:
        netsh int ipv6 6to4 set state disabled
    But as @joeqwerty says, this only masks the problem. I'm still trying to find out why IPv6 communication on our network is completely non-working.
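    One avenue worth testing, offered as a suggestion of my own rather than something from the original thread: Windows chooses between A and AAAA answers using an RFC 3484-style prefix policy table, so clients can be told to prefer IPv4 without disabling IPv6 at all. A sketch, to be verified on a test machine first:
        rem Show the current address-selection policy table
        netsh interface ipv6 show prefixpolicies
        rem Raise the precedence of IPv4-mapped addresses (::ffff:0:0/96)
        rem above native IPv6, so dual-stack clients try IPv4 first
        netsh interface ipv6 set prefixpolicy ::ffff:0:0/96 55 4
    Microsoft also documents a DisabledComponents registry value (0x20 under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters) meaning "prefer IPv4 over IPv6" without disabling the stack, which may be easier to push out via Group Policy than per-client netsh changes.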

    Read the article

  • Surviving MATLAB and R as a Hardcore Programmer

    - by dsimcha
    I love programming in languages that seem geared towards hardcore programmers. (My favorites are Python and D.) MATLAB is geared towards engineers and R is geared towards statisticians, and it seems like these languages were designed by people who aren't hardcore programmers and don't think like hardcore programmers. I always find them somewhat awkward to use, and to some extent I can't put my finger on why. Here are some issues I have managed to identify:
    (Both): The extreme emphasis on vectors and matrices, to the extent that there are no true primitives.
    (Both): The difficulty of basic string manipulation.
    (Both): Lack of, or awkwardness in, support for basic data structures like hash tables and "real", i.e. type-parametric and nestable, arrays.
    (Both): They're really, really slow even by interpreted-language standards, unless you bend over backwards to vectorize your code.
    (Both): They seem not to be designed to interact with the outside world. For example, both are fairly bulky programs that take a while to launch and seem not to be designed to make simple text-filter programs easy to write. Furthermore, the lack of good string processing makes file I/O in anything but very standard forms nearly impossible.
    (Both): Object orientation has a very bolted-on feel. Yes, you can do it, but it doesn't feel much more idiomatic than OO in C.
    (Both): No obvious, simple way to get a reference type. No pointers or class references. For example, I have no idea how you roll your own linked list in either of these languages.
    (MATLAB): You can't put multiple top-level functions in a single file, encouraging very long functions and cut-and-paste coding.
    (MATLAB): Integers apparently don't exist as a first-class type.
    (R): The basic built-in data structures seem way too high-level and poorly documented, and never seem to do quite what I expect given my experience with similar but lower-level data structures.
    (R): The documentation is spread all over the place and virtually impossible to browse or search. Even D, which is often knocked for bad documentation and is still fairly alpha-ish, is substantially better as far as I can tell.
    (R): At least as far as I'm aware, there's no good IDE for it. Again, even D, a fairly alpha-ish language with a small community, does better.
    In general, I also feel like MATLAB and R could easily be replaced by plain old libraries in more general-purpose languages, if sufficiently comprehensive libraries existed. This is especially true in newer general-purpose languages that include lots of features for library writers.
    Why do R and MATLAB seem so weird to me? Are there any other major issues that you've noticed that may make these languages come off as strange to hardcore programmers? When their use is necessary, what are some good survival tips?
    Edit: I'm seeing one issue in some of the answers I've gotten. I have a strong personal preference, when I analyze data, for having one script that incorporates the whole pipeline. This implies that a general-purpose language needs to be used. I hate having to write a script to "clean up" the data and spit it out, then another to read it back in a completely different environment, and so on. Using MATLAB/R for some of my work and a completely different language, with a completely different address space and way of thinking, for the rest is a huge source of friction. Furthermore, I know glue layers exist, but they always seem horribly complicated and introduce friction of their own.
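    On the reference-type point, both languages do have an escape hatch, though neither advertises it (this example is mine, not from the question). In R, environments are the one built-in structure with reference semantics, so a linked list can be rolled by hand; a minimal sketch:
        # Each node is an environment: mutable and passed by reference
        new_node <- function(value) {
          node <- new.env()
          node$value <- value
          node$nxt <- NULL    # 'next' is a reserved word in R
          node
        }
        head <- new_node(1)
        head$nxt <- new_node(2)
        head$nxt$value        # 2; mutations through any reference are visible everywhere
    MATLAB's rough equivalent is a class deriving from handle, which likewise gets you reference semantics.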

    Read the article

  • How would I handle input with a Game Component?

    - by Aufziehvogel
    I am currently having problems finding my way into the component-oriented XNA design. I read an overview of the general design pattern and googled a lot of XNA examples. However, they seem to be on exactly the opposite side. In the general design pattern, an object (my current player) is passed to InputComponent::update(Player). This means the class will know what to do and how this will affect the game (e.g. move a person vs. scroll text in a menu). Yet in XNA, GameComponent::update(GameTime) is called automatically, without a reference to the current player. The only XNA examples I found built some sort of higher-level keyboard engine into the game component, like this:
        class InputComponent : GameComponent
        {
            public void keyReleased(Keys key);
            public void keyPressed(Keys key);
            public bool keyDown(Keys key);
            public override void Update(GameTime gameTime)
            {
                // compare previous state with current state and
                // determine if released, pressed, down or nothing
            }
        }
    Some others went a bit further, making it possible to use a Service Locator with a design like this:
        interface IInputComponent
        {
            void downwardsMovement(Keys key);
            void upwardsMovement(Keys key);
            bool pausedGame(Keys key);
            // determine which keys are pressed and what that means;
            // can be done for different inputs in different implementations
            void Update(GameTime gameTime);
        }
    Yet then I wonder whether it is possible to design an input class that resolves all possible situations. In a menu, a mouse click can mean "click that button", but in gameplay it can mean "shoot that weapon". So if I am using such a modular design with game components for input, how much logic is to be put into the InputComponent / KeyboardComponent / GamepadComponent, and where is the rest handled? What I had in mind when I heard about Game Components and the Service Locator in XNA was something like this:
    use Game Components to run the InputHandler automatically in the loop;
    use the Service Locator to be able to switch input at runtime (i.e. let the player choose if he wants to use a gamepad or a keyboard, or which shall be player 1 and which player 2).
    However, now I cannot see how this can be done. The first code example does not seem flexible enough, as on a gamepad you could require some combination of buttons for something that is possible on a keyboard with only one button or with the mouse. The second code example seems really hard to implement, because the InputComponent has to know which context we are currently in. Moreover, you could imagine your application being multi-layered, with the key-stroke travelling through all layers to a bottom layer that requires a different behaviour than the InputComponent would have guessed from the top layer. The general design pattern of passing the Player to update() has no representation in XNA, and I also cannot see how and where to decide which class should be passed to update(). Most of the time it would of course be the player, but sometimes there could be menu items you have to, or can, click. I see that the question in general is already dealt with here, but probably from a more elaborate point of view; at least, I am not smart enough in game development to understand it. I am searching for a rather code-based example directly for XNA. And the answer there leaves (a noob like) me still alone in how the object that should receive the detected event is chosen. If I have a key-up event, should it go to the text box or to the player?
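    One concrete shape this can take (my own sketch, not from the question): keep the component dumb (it only snapshots keyboard state each frame) and publish it through XNA's built-in service container, so each screen or game state interprets the input for itself. The component never assigns meaning to keys, which sidesteps the "InputComponent must know the context" problem:
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Input;

        public interface IInputService
        {
            bool IsPressed(Keys key);   // down this frame, up last frame
            bool IsDown(Keys key);
        }

        public class InputComponent : GameComponent, IInputService
        {
            private KeyboardState current;
            private KeyboardState previous;

            public InputComponent(Game game) : base(game)
            {
                // Register with the built-in service locator so any screen
                // can fetch it without holding a reference to the component
                game.Services.AddService(typeof(IInputService), this);
            }

            public override void Update(GameTime gameTime)
            {
                previous = current;
                current = Keyboard.GetState();
            }

            public bool IsPressed(Keys key)
            {
                return current.IsKeyDown(key) && previous.IsKeyUp(key);
            }

            public bool IsDown(Keys key)
            {
                return current.IsKeyDown(key);
            }
        }
    The active screen then asks the service "was Space pressed?" and decides for itself whether that means "click the button" or "shoot the weapon"; a GamepadComponent implementing the same interface gives the runtime switching you describe.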

    Read the article

  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, the categories those products belong to (a product can be in one or more categories, and these are referred to as "Product Categories"), and which stores the products are available at. These tables are updated once a week from a feed from the customer. Since some of the product categories are the same, or closely related for our purposes, there is another level of categories called "General Categories"; a general category can have one or more product categories. For the scope of these tables, here are some rough numbers:
    Data tables: Products: 475,000; Manufacturers: 1,300; Stores: 150; General Categories: 245; Product Categories: 500.
    Mapping tables: Product Category -> Product: 655,000; Stores -> Products: 50,000,000.
    Now, for the actual problem: as part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories a single manufacturer dominates, and selecting rows at random causes the results to strongly favor that manufacturer. The solution currently in place, which works for most cases, involves selecting all of the rows that match the store and category criteria, partitioning them on manufacturer, and including each row's number within its partition, then selecting from that where the row number for the manufacturer is less than n, and using ROWCOUNT to clamp the total rows returned to n. The query looks something like this:
        SET ROWCOUNT 6
        select p.Id, GeneralCategory_Id, Product_Id,
               ISNULL(m.DisplayName, m.Name) AS Vendor,
               MSRP, MemberPrice, FamilyImageName
        from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id,
                     ctp.Store_id, Manufacturer_id,
                     ROW_NUMBER() OVER (PARTITION BY Manufacturer_id
                                        ORDER BY NEWID()) AS 'VendorOrder',
                     MSRP, MemberPrice, FamilyImageName
              from GeneralCategory gc
              inner join GeneralCategoriesToProductCategories gctpc
                      on gc.Id = gctpc.GeneralCategory_Id
              inner join ProductCategoryToProduct pctp
                      on gctpc.ProductCategory_Id = pctp.ProductCategory_Id
              inner join Product p on p.Id = pctp.Product_Id
              inner join StoreToProduct ctp on p.Id = ctp.Product_id
              where gc.Id = @GeneralCategory
                and ctp.Store_id = @StoreId
                and p.Active = 1
                and p.MemberPrice > 0) p
        inner join Manufacturer m on m.Id = p.Manufacturer_id
        where VendorOrder <= 6
        order by NEWID()
        SET ROWCOUNT 0
    Running this query with an execution plan shows that for the majority of these tables it's doing a Clustered Index Seek. Two operations take up roughly 90% of the time:
    Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store and the key of the product. It seems that NHibernate decided not to make a composite key when creating this table, but I'm not concerned about this at this point, compared to the other seek...
    Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant.
    On categories without a lot of products, performance is acceptable (<50 ms), but larger categories can take a few hundred ms, with the largest category (about 170k products) taking 3 s.
    It seems I have two ways to go from this point:
    Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index seek, I don't know what could be done there. The inner query could be tuned not to return all of the possible rows for the category, but I am unsure how to do this while maintaining the requirements (random products, with a good mix of manufacturers).
    Denormalize this data for the purpose of this query during the once-a-week import. However, I am unsure how to do this and maintain the requirements (see the sketch below for one possible shape).
    Does anyone have any input on either of these items?
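    On the denormalize option, here is a sketch of one possible shape, with invented table and column names rather than anything from the original post: since the feed only changes weekly, the import job could flatten the five-table join into one covering table keyed the way the query reads it, so the runtime query touches a single index:
        -- Rebuilt once per weekly import
        CREATE TABLE StoreCategoryProduct (
            Store_id           int           NOT NULL,
            GeneralCategory_Id int           NOT NULL,
            Product_Id         int           NOT NULL,
            Manufacturer_id    int           NOT NULL,
            Vendor             nvarchar(200) NOT NULL,
            MSRP               money         NULL,
            MemberPrice        money         NULL,
            FamilyImageName    nvarchar(260) NULL
        );
        CREATE CLUSTERED INDEX IX_SCP
            ON StoreCategoryProduct (Store_id, GeneralCategory_Id, Manufacturer_id);
    The runtime query then applies the same PARTITION BY Manufacturer_id ORDER BY NEWID() trick over just this table: the randomness and the manufacturer mix are preserved, but the join chain and the expensive Product seeks drop out of the hot path.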

    Read the article

  • ArgumentError: Error #2004: One of the parameters is invalid.

    - by Florian
    I got the following stack trace while running a piece of code in my Flex application:
        ArgumentError: Error #2004: One of the parameters is invalid.
        at ObjectOutput/writeObject()
        at mx.collections::ArrayList/writeExternal()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\collections\ArrayList.as:470]
        at mx.collections::ArrayCollection/writeExternal()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\collections\ArrayCollection.as:144]
        at flash.utils::ByteArray/writeObject()
        at mx.utils::ObjectUtil$/copy()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\utils\ObjectUtil.as:100]
        at components::SettingsHandler/saveOpenNodes()[C:\workspaces\Intranets\UniqueInbox\flex_src\components\SettingsHandler.as:153]
        at components::soapjira/getIssuesByFilters()[C:\workspaces\Intranets\UniqueInbox\flex_src\components\soapjira.as:295]
        at components.tabs::JiraAllActions/loadData()[C:\workspaces\Intranets\UniqueInbox\flex_src\components\tabs\JiraAllActions.mxml:193]
        at components::SettingsHandler/settingsClosed()[C:\workspaces\Intranets\UniqueInbox\flex_src\components\SettingsHandler.as:114]
        at flash.events::EventDispatcher/dispatchEventFunction()
        at flash.events::EventDispatcher/dispatchEvent()
        at mx.core::UIComponent/dispatchEvent()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\core\UIComponent.as:9408]
        at components.general::JiraSettings/closeSettings()[C:\workspaces\Intranets\UniqueInbox\flex_src\components\general\JiraSettings.mxml:58]
        at components.general::JiraSettings/__save_click()[C:\workspaces\Intranets\UniqueInbox\flex_src\components\general\JiraSettings.mxml:107]
    The stack trace comes up when running the following line (SettingsHandler.as:153), which copies openNodes via ObjectUtil:
        var tmp:Object = ObjectUtil.copy(parentComponent.dataGrid.dataProvider.openNodes);
    I am actually copying the open nodes of a datagrid's data provider. This had been working until now and just started going wrong; I have no idea what I changed that could interfere with it. In debug mode, I see that openNodes is accessible and contains the open nodes, as expected. Doing tmp:Object = parentComponent.dataGrid.dataProvider.openNodes (a plain assignment) works, but not with ObjectUtil. (parentComponent is the reference to the component which contains the DG.)
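    A guess at the mechanism (my own reading, not from the original post): ObjectUtil.copy() round-trips the whole graph through a ByteArray via AMF serialization, so it fails as soon as the openNodes graph has started to contain something AMF can't encode (a display object, a function reference, or a custom class not registered with flash.net.registerClassAlias()). The plain assignment never serializes anything, which is why it works. A hedged sketch of a shallow copy that keeps the node references without deep serialization, assuming the nodes themselves don't need cloning:
        // Shallow copy: a new top-level collection holding the same node
        // references; no AMF serialization is involved
        var src:Object = parentComponent.dataGrid.dataProvider.openNodes;
        var tmp:Array = [];
        for each (var node:Object in src)
        {
            tmp.push(node);
        }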

    Read the article

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs for a database project using Hibernate.
    Design #1: (1) Create a general data-provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation: a data-provider implementation could access data in a database, an XML file, a service, or something else, and the user of a data provider does not need to know about it. (2) Create a database library with Hibernate that implements the data-provider interface from (1). The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes: one in the general data-provider interface (let's call them DPI-Objects), and another used in the database library exclusively for entity/attribute mapping in Hibernate (let's call them H-Objects). In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert H-Objects into DPI-Objects.
    Design #2: Do not create a general data-provider interface. Expose H-Objects directly to components that use the database library, so the user of the database library needs to be aware of Hibernate.
    I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user of the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data-provider interface and use Design #2? What do you think about this? Thanks for your time!
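    A middle path worth considering (my sketch, with invented names, not from the question): keep the DAO interface from Design #1, but instead of a second set of container classes, have the Hibernate entities implement plain interfaces defined alongside the DAO. Callers see only the interfaces; only the database library sees Hibernate:
        // Data-provider interface package: no Hibernate imports anywhere
        public interface Customer {
            Long getId();
            String getName();
        }
        public interface CustomerDao {
            Customer findById(Long id);
        }

        // Database library: the mapped entity implements the plain interface
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;

        public class CustomerEntity implements Customer {
            private Long id;
            private String name;
            public Long getId() { return id; }
            public String getName() { return name; }
            // setters used by Hibernate omitted for brevity
        }

        public class HibernateCustomerDao implements CustomerDao {
            private final SessionFactory sessionFactory;
            public HibernateCustomerDao(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }
            public Customer findById(Long id) {
                Session session = sessionFactory.openSession();
                try {
                    return (Customer) session.get(CustomerEntity.class, id);
                } finally {
                    session.close();
                }
            }
        }
    This keeps a single set of data classes while still hiding Hibernate behind an interface, at the cost that the objects handed out are still Hibernate-managed instances (with the usual lazy-loading caveats).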

    Read the article

  • How to configure database connection securely

    - by chiccodoro
    Similar but not the same: "How to securely store database connection details" and "Securely connecting to database within a application".
    Hi all, I have a C# WinForms application connecting to a database server. The database connection string, including a generic user/pass, is placed in an NHibernate configuration file, which lies in the same directory as the exe file. Now I have this issue: the user that runs the application should not get to know the username/password of the general database user, because I don't want him to rummage around in the database directly. Alternatively, I could hardcode the connection string, which is bad because the administrator must be able to change it if the database is moved or if he wants to switch between dev/test/prod environments. So far I've found three possibilities:
    The first referenced question was generally answered by making the file readable only by the user that runs the application. But that's not enough in my case (the user running the application is a person; the database user/pass are generic and shouldn't even be accessible to the person).
    The first answer additionally proposed encrypting the connection data before writing it to the file. With this approach, the administrator is no longer able to configure the connection string, because he cannot encrypt it by hand.
    The second referenced question provides an approach for this very scenario, but it seems very complicated.
    My questions to you: This is a very general issue, so isn't there any general "how-to-do-it" way, somehow a "design pattern"? Is there some support in .NET's config infrastructure? (Optional, maybe out of scope:) Can I combine that easily with the NHibernate configuration mechanism?
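    One general ".NET way" that fits these constraints, sketched here under the assumption that machine-scope protection is acceptable (this is my suggestion, not from the referenced answers): Windows DPAPI can encrypt the connection string so the file on disk is opaque to the person at the keyboard, while a small admin-only tool writes it during configuration:
        using System;
        using System.Security.Cryptography;  // reference System.Security.dll
        using System.Text;

        static class ConnectionStringProtector
        {
            // Run by the administrator when (re)configuring an environment
            public static string Encrypt(string connectionString)
            {
                byte[] cipher = ProtectedData.Protect(
                    Encoding.UTF8.GetBytes(connectionString),
                    null,                               // optional entropy
                    DataProtectionScope.LocalMachine);  // decryptable on this machine only
                return Convert.ToBase64String(cipher);
            }

            // Called by the application at startup, before configuring NHibernate
            public static string Decrypt(string base64Cipher)
            {
                byte[] plain = ProtectedData.Unprotect(
                    Convert.FromBase64String(base64Cipher),
                    null,
                    DataProtectionScope.LocalMachine);
                return Encoding.UTF8.GetString(plain);
            }
        }
    The usual caveat applies: LocalMachine scope means any process on the box can decrypt, so this raises the bar against casual snooping rather than stopping a determined local user. As for .NET's config infrastructure, protected configuration sections (aspnet_regiis with the DPAPI provider) are the built-in version of the same idea, which answers that question with a qualified yes.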

    Read the article

  • Best Practices - which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). One question that frequently comes up is "which types of domain should I use to run applications?" There used to be a simple answer in most cases: "only run applications in guest domains", but enhancements to T-series servers, Oracle VM Server for SPARC, and the advent of SPARC SuperCluster have made this question more interesting and worth qualifying differently. This article reviews the relevant concepts and provides suggestions on where to deploy applications in a logical domains environment.
    Review: division of labor and types of domain. Oracle VM Server for SPARC offloads many functions from the hypervisor to domains (also called virtual machines). This is a modern alternative to using a "thick" hypervisor that provides all virtualization functions, as in traditional VM designs. It permits a simpler hypervisor design, which enhances reliability and security, and it reduces single points of failure by assigning responsibilities to multiple system components, which further improves reliability and security. In this architecture, management and I/O functionality are provided within domains. Oracle VM Server for SPARC does this by defining the following types of domain, each with its own role:
    Control domain: the management control point for the server, used to configure domains and manage resources. It is the first domain to boot on power-up, is an I/O domain, and is usually a service domain as well.
    I/O domain: has been assigned physical I/O devices: a PCIe root complex, a PCI device, or an SR-IOV (Single Root I/O Virtualization) function. It has native performance and functionality for the devices it owns, unmediated by any virtualization layer.
    Service domain: provides virtual network and disk devices to guest domains.
    Guest domain: a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.
    Typical deployment. A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is typically also a service domain, in order to leverage the available PCI busses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain, which is also the sole I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure: guest domains have virtual disk and network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device doesn't result in an application outage. This is also used for "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by a single control domain, using multipath I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
    Changing application deployment patterns. The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with two I/O busses, so there is more I/O capacity that can be used for applications. Increased T-series server capacity made it attractive to run more vertical applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the SPARC SuperCluster engineered system, announced a year ago at Oracle OpenWorld. In SPARC SuperCluster, I/O domains are used for high-performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is the introduction of Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices; a domain with either a DIO or an SR-IOV device is an I/O domain. In summary: not all I/O domains own PCIe root complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications.
    However, there are some limitations and considerations. At this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command has to be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. To recap:
    Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration.
    I/O domains can be used for applications with high performance requirements. This is used to great effect in SPARC SuperCluster and in general T4 deployments. Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV) make this more attractive by giving direct I/O access to more domains.
    Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect other domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage (a sketch of such a configuration follows below).
    The control domain should in general not be used to run applications, for the same reason. SPARC SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment; it's an engineered system with specifically configured applications optimized for performance.
    These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.
    Summary. Higher-capacity T-series servers have made it more attractive to use them for applications with high resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide higher performance for critical applications.
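    To make the recommended pattern concrete, here is a hedged sketch (domain and volume names invented; verify the syntax against your release's ldm man page) of a guest domain given redundant virtual I/O from two service domains, so either service domain can be taken down without an application outage:
        # From the control domain: define the guest
        ldm add-domain ldg1
        ldm set-vcpu 8 ldg1
        ldm set-memory 16G ldg1

        # Virtual network devices from switches in two different service domains
        ldm add-vnet vnet0 primary-vsw0 ldg1
        ldm add-vnet vnet1 secondary-vsw0 ldg1

        # Virtual disks from two virtual disk servers (for multipathing,
        # both backends would point at the same shared LUN)
        ldm add-vdisk vdisk0 vol0@primary-vds0 ldg1
        ldm add-vdisk vdisk1 vol0@secondary-vds0 ldg1

        ldm bind-domain ldg1
        ldm start-domain ldg1
    Inside the guest, IPMP across the two virtual networks and disk multipathing across the two virtual disks complete the redundancy picture.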

    Read the article

  • Ask How-To Geek: Clone a Disk, Resize Static Windows, and Create System Function Shortcuts

    - by Jason Fitzpatrick
    This week we take a look at how to clone a hard disk for easy backup or duplication, resize stubbornly static windows, and create shortcuts for dozens of Windows functions. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas.

    Read the article

  • DIY Sunrise Alarm Clock Sports a Polished Build and Perfect Time Keeping

    - by Jason Fitzpatrick
    We’ve seen our fair share of sunrise simulators, but by far this one is the most polished build: everything from the atomic clock circuit to the LEDs is packed cleanly in a frosted acrylic case. Renaud Schleck, the tinkerer behind the build, describes the guts: "This is an alarm clock simulating the sunrise, which means that a few minutes before the alarm goes off, it shines some light whose brightness increases over time. The alarm clock is built around a MSP430G2553 microcontroller compatible with the MSP430 LaunchPad from Texas Instruments. It features a DCF77 module which sets the clock automatically through radio waves."

    Read the article

  • The Best of CES (Consumer Electronics Show) in 2011

    - by Justin Garrison
    This year, How-To Geek’s own Justin was on-site at the Consumer Electronics Show in Las Vegas, where every gadget manufacturer shows off their latest creations, and he was able to sit down and get hands-on with most of them. Here’s the best of the bunch. Make sure to also check out our list of the Worst of CES 2011, where we covered the gadgets that just didn’t make the cut.

    Read the article

  • Replica Myst Book Actually Plays all the Myst Games

    - by Jason Fitzpatrick
    Runaway 1990s gaming hit Myst featured books that had the power to transport you to other worlds. One dedicated fan has gone so far as to make a book that, when opened, transports you to the Myst universe. From hand-crafting the book itself to populating the guts of the book with carefully selected (and frequently modified) parts, Mike Ando left no part of his project uncustomized. The end result is a stunning mod and tribute to the Myst franchise: a beautiful book you can open and play through all the games in the series. Check out the video above to see it in action, then hit up the link below to check out Mike’s build album. Myst Book [via Hack A Day]

    Read the article

  • Apple II Teardown and Restoration Offers a Peek at Computing History [Video]

    - by Jason Fitzpatrick
    In this extended teardown video, we’re granted a peek at the guts of an Apple IIe and treated to quite a bit of Apple IIe history in the process. Todd Harrison, via his project blog ToddFun, shares videos of his Apple IIe restoration project. The videos are lengthy, but include close-up examination of all the parts and lots of information about the history of the computer and its construction. You can check out the rest of his Apple II videos and posts at the link below. Apple II Plus from 1982 teardown, repair, cleanup and demonstration [via The Unofficial Apple Weblog]

    Read the article

  • Electrified Light Saber Helps You Slay Bugs Like a Jedi [Video]

    - by Jason Fitzpatrick
    This fun little DIY project combines a toy light saber with the guts of an electrified fly-swatter to yield a bug-slaying sword perfect for your epic battles against the Empire’s tiniest soldiers. Courtesy of Caleb over at Hack A Day, the build is surprisingly simple and quick to put together (if you’re handy with a screwdriver and soldering iron). Check out the video above to see the build and the results, or hit up the link below to read more about it. Building a Bug Zapping Light Saber [Hack A Day]

    Read the article

  • Hack a Nintendo Zapper into a Real Life Laser Blaster

    - by Jason Fitzpatrick
    Why settle for zapping ducks on the screen when you could be popping balloons and lighting matches on fire? This awesome (but rather dangerous) hack turns an old Nintendo Zapper into a legitimate laser gun. Courtesy of the tinkerers over at North Street Labs, we’re treated to a Nintendo Zapper overhaul that replaces the guts with a powerful 2W blue laser, a battery pack, and a keyed safety switch. Check out the video below to see the laser blaster in action. For more information on the build and a pile of more-than-merited safety warnings, hit up the link below. Nintendo Zapper 2W+ Laser [via Boing Boing]

    Read the article

  • How to Setup Software RAID for a Simple File Server on Ubuntu

    - by Sysadmin Geek
    Do you need a file server on the cheap that is easy to set up, “rock solid” reliable, and with email alerting? This guide will show you how to use Ubuntu, software RAID, and Samba to accomplish just that.
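    The core of such a setup (my sketch with placeholder device names, not taken from the article) is mdadm plus its mail configuration; Samba then just shares a directory on the array:
        # Create a two-disk RAID 1 array (this destroys data on sdb1/sdc1)
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        sudo mkfs.ext4 /dev/md0

        # Email alerting: set MAILADDR in /etc/mdadm/mdadm.conf, then run
        # the monitor daemon, which mails when an array degrades
        sudo mdadm --monitor --scan --daemonise
    Mount /dev/md0, export the mount point via a share section in /etc/samba/smb.conf, and the "rock solid with email alerting" part is mostly mdadm --monitor doing its job.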

    Read the article

  • CLI program to download album art

    - by John Baber
    I'd like to be able to do this:
        $ pwd
        /home/$USER/music/ripped_music/Monty_Python-Instant_Record_Collection
        $ ls
        01.The_Executive_Intro.mp3 ... 16.The_Lumberjack_Song.mp3
        $ mystery_command_or_script .
        $ ls
        01.The_Executive_Intro.mp3 ... 16.The_Lumberjack_Song.mp3 album_cover.jpg
        $
    Somewhere in the guts of Rhythmbox, Totem, etc. this is being done. I'd like to be able to do it myself. I don't need help actually writing a script. I'd really just like to know if there's something like CDDB for album covers. (Scraping albumart.org is the current working solution.)
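    There is, in fact, a rough CDDB-for-covers: the Cover Art Archive, keyed by MusicBrainz release IDs. A hedged sketch of the mystery command (parsing artist/album out of the directory name is left out, and the MBID lookup is the fiddly part):
        #!/bin/sh
        # Usage: fetch_cover.sh "Monty Python" "Instant Record Collection"
        ARTIST="$1"; ALBUM="$2"
        # Ask MusicBrainz for the first matching release MBID
        MBID=$(curl -s -G "https://musicbrainz.org/ws/2/release/" \
                 --data-urlencode "query=artist:\"$ARTIST\" AND release:\"$ALBUM\"" \
                 --data "fmt=json" \
                 -A "fetch_cover/0.1 (you@example.com)" \
               | grep -o '"id":"[0-9a-f-]\{36\}"' | head -n1 | cut -d'"' -f4)
        # Front cover via the Cover Art Archive redirect
        curl -sL "https://coverartarchive.org/release/$MBID/front" -o album_cover.jpg
    MusicBrainz asks automated clients to send a meaningful User-Agent (hence the -A), and the JSON scraping with grep is deliberately crude, so treat this as a starting point rather than a finished script.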

    Read the article

  • Prevent APT from overwriting JCE jars

    - by Doc
    My server runs a Java application that requires me to replace a few Java library files with ones I downloaded on my own. This has to do with the JCE security extensions and isn't really relevant to my question. I've found that these library files tend to get overwritten by apt when it later updates my Java package. Is there an apt-friendly way of masking these specific files so apt won't touch them?
    Potential solutions I'm considering: just removing the write flag from the files, though I expect this will cause apt to spew its guts everywhere when it later tries to overwrite them. Perhaps there's a custom Java library directory I don't know of, where I can park my files and they'll be loaded instead of the package's defaults? The last-resort option I'm considering is writing a cron job to periodically replace the files with my versions. I hate this option.
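    dpkg (and therefore apt) has a first-class answer for exactly this case: dpkg-divert, which tells the package system to install its own copy of a file under a different name, leaving yours untouched across upgrades. A sketch with a placeholder path, since the real JCE jar location depends on which JVM package you use:
        # Tell dpkg to put the package's version of the jar aside on upgrade
        sudo dpkg-divert --divert /usr/lib/jvm/default-java/jre/lib/security/local_policy.jar.dist \
             --rename /usr/lib/jvm/default-java/jre/lib/security/local_policy.jar

        # Now drop in your own jar; package updates will no longer touch it
        sudo cp ~/jce/local_policy.jar /usr/lib/jvm/default-java/jre/lib/security/
    Undo it later with dpkg-divert --rename --remove followed by the same path. This survives upgrades cleanly, unlike the read-only flag or the cron approach.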

    Read the article

  • Hack an Old Hardcover Book into a Reading Light

    - by Jason Fitzpatrick
    If you’re looking for a clever way to conceal a reading lamp on your bedside table, this hardcover-to-book-light conversion is just the ticket. For this project you’ll be hollowing out a hardcover book and replacing the guts with a wooden frame and a strip of cool-running and efficient LED lights. You’ll need some very basic woodworking and soldering skills and an afternoon or two (mostly consumed, as the author notes, by waiting for glue to dry). Check out the video below to see the full build. Hit up the link below for a full parts list and additional building tips. How To: Not Your Ordinary Book Light [Grathio via Neatorama]

    Read the article

  • Can it be useful to build an application starting with the GUI?

    - by Grant Palin
    The trend in application design and development seems to be starting with the "guts": the domain, then data access, then infrastructure, etc. The GUI usually comes later in the process. I wonder if it could ever be useful to build the GUI first... My rationale is that by building at least a prototype GUI, you gain a better idea of what needs to happen behind the scenes, and so are in a better position to start work on the domain and supporting code. I can see an issue with this practice: if the supporting code is not yet written, there won't be much for the GUI layer to actually do. Perhaps building mock objects or throwaway classes (somewhat as is done in unit testing) would provide just enough of a foundation to build the GUI on initially (a rough sketch of this follows below). Might this be a feasible idea for a real project? Maybe we could add GDD (GUI Driven Development) to the acronym stable...
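    A minimal sketch of the throwaway-backend idea (invented names; any UI toolkit would do, WinForms used here): define the service interface the GUI needs, wire the prototype to a canned implementation, and swap in the real domain layer later without touching the forms:
        using System.Collections.Generic;
        using System.Windows.Forms;

        public interface IOrderService
        {
            IList<string> GetOpenOrders();
        }

        // Throwaway stub: just enough behaviour to make the GUI demonstrable
        public class FakeOrderService : IOrderService
        {
            public IList<string> GetOpenOrders()
            {
                return new List<string> { "Order #1001", "Order #1002" };
            }
        }

        // The form depends only on the interface, so GUI-first work can start
        // before any real domain or data-access code exists
        public class OrdersForm : Form
        {
            public OrdersForm(IOrderService service)
            {
                var list = new ListBox { Dock = DockStyle.Fill };
                foreach (var order in service.GetOpenOrders())
                    list.Items.Add(order);
                Controls.Add(list);
            }
        }
    Whatever the real domain layer turns out to look like, it only has to grow into the interfaces the prototype GUI forced you to articulate, which is arguably the whole benefit of the exercise.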

    Read the article

  • BizTalk: namespaces

    - by Leonid Ganeline
    The BizTalk team did a good job hiding the .NET guts from developers. Developers work with editors and hardly at all with .NET code. The Orchestration editor, the Mapper, the Schema editor, the Pipeline editor: all these editors hide what is going on with the artifacts created and deployed. Working with BizTalk artifacts year after year brings us knowledge that helps us understand more about the .NET guts. I would like to highlight the .NET namespaces: what they are, and how they influence our everyday tasks in BizTalk application development.
    What is it? Most of the BizTalk artifacts are compiled into .NET classes. Not all of them... but I will show you later. Classes are placed inside namespaces. I will not describe here why we need namespaces and what they are; I assume you all know more about that than me. Here I would like to emphasize that almost every BizTalk artifact is implemented as a .NET class within a .NET namespace.
    Where to see the namespaces in development? The namespaces are inconsistently spread across the artifact parameters. Let's start with namespace placement in development; then we go to namespaces in deployment and operations. I am using pictures from the BizTalk Server 2013 Beta and Visual Studio 2012, but there have been no changes regarding the namespaces since BizTalk 2006.
    Default namespace: When a new BizTalk project is created, the default namespace is set to the same value as the name of the project. This namespace is used for all new BizTalk artifacts added to the project.
    Orchestrations: When we select a green or a red marker (the Begin and End orchestration shapes) we see the orchestration Properties window. We can also click anywhere on the space between Port Surfaces to see this window.
    Schemas: The only way to see the .NET namespace for a schema is to select the schema file name in the Solution Explorer. Notes: We can also see the Type Name parameter, which is the name of the corresponding .NET class, and the Fully Qualified Name parameter. We cannot see the schema namespace when selecting any node on the schema editor surface; only selecting the schema file name gives us the namespace parameter. If we select a <Schema> node, we can get the Target Namespace parameter of the schema. This is NOT the .NET namespace! It is an XML namespace. This XML namespace appears inside the XML schema as a special targetNamespace attribute, and inside the XML document itself as a special xmlns attribute (see the sketch below).
    Maps: Similar to schemas, the only way to see the .NET namespace for a map is to select the map file name in the Solution Explorer.
    Pipelines: Similar to schemas, the only way to see the .NET namespace for a pipeline is to select the pipeline file name in the Solution Explorer.
    Ports, Policies and Tracking Profiles: The Send and Receive Ports, the Policies and the BAM Tracking Profiles do not create .NET classes, and they do not have associated .NET namespaces.
    How to copy artifacts? Since new versions of BizTalk Server keep going to production, I am spending more and more time redesigning and refactoring BizTalk applications. It is good to know how the refactoring process copes with the .NET namespaces. Let's see what happens to the namespaces when we copy artifacts from one project to another. Here is an example: I am going to group the artifacts under project folders. So, I have created a Group folder, run the Add / Existing Item.. command, and chosen all the artifacts in the project root. The artifact copies were created in the Group folder. What happened to the namespaces of the artifacts? As you can see, the folder name, "Group", was added to the namespace. That is great! When I added a folder, I added one more level in the name hierarchy, and the namespace change simply reflects this hierarchy change. The same namespace adjustment happens when we copy BizTalk artifacts between projects. But there is an issue with the namespace of an orchestration: it is not changed. The namespaces of the schemas, maps and pipelines are changed, but not the orchestration namespace; I have to change the orchestration namespace manually.
    Now another example: I create a new project folder and move the artifacts there from the project root by drag and drop. We notice that the artifact namespaces are not changed. Another example: I copy the artifacts from the project root by (drag and drop) + Ctrl. We notice that the artifact namespaces are changed. It works exactly as it did with the Add / Existing Item.. command.
    Conclusion: The namespace parameter is placed inconsistently in different places for different artifacts. Moving artifacts changes the namespaces of the schemas, maps and pipelines, but not of the orchestrations.
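    To make the XML-vs-.NET distinction concrete, here is a hand-written illustration (the namespace URIs are invented, and the generated-class shape is from memory, so treat it as a sketch): the same schema artifact carries both names, and they never have to match.
        <!-- MySchema.xsd: the Target Namespace lives inside the schema itself -->
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://contoso.com/orders"
                   xmlns="http://contoso.com/orders">
          <xs:element name="Order" />
        </xs:schema>

        <!-- An instance message declares the same XML namespace via xmlns -->
        <ns0:Order xmlns:ns0="http://contoso.com/orders" />
    The .NET namespace, by contrast, appears only in the generated class (something like namespace MyProject.Group { public sealed class MySchema : Microsoft.XLANGs.BaseTypes.SchemaBase { ... } }), and it, not the targetNamespace, is what changes when artifacts are copied between folders.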

    Read the article

  • Why do I get this error trying to compile libxml2?

    - by bfaskiplar
    Although I have installed libxml2 and reinstalled it a few times, I cannot compile C source code because the compiler cannot find the header file. I am able to locate it (in the folder where I downloaded the tar.gz package), but I have a feeling in my guts that this package isn't installed correctly, because when I try to sudo make install, it says:
        /bin/bash: /home/bfaskiplar/Downloads/tar.gz: No such file or directory
        make[2]: *** [install-libLTLIBRARIES] Error 127
        make[2]: Leaving directory `/home/bfaskiplar/Downloads/tar.gz packages/libxml2-2.8.0'
        make[1]: *** [install-am] Error 2
        make[1]: Leaving directory `/home/bfaskiplar/Downloads/tar.gz packages/libxml2-2.8.0'
        make: *** [install-recursive] Error 1
    That's why I installed the Synaptic package manager and reinstalled it, but in that case, isn't it supposed to put the header files in the default directory where gcc normally searches? Currently I am able to compile with the -I option, but I wonder why I have to copy headers manually even though I used Synaptic for the installation, and why I am getting Error 1 and Error 2 when trying to install the package manually. Thanks in advance.
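    One detail worth noticing in the transcript (my inference, not something stated in the question): the build directory is /home/bfaskiplar/Downloads/tar.gz packages/libxml2-2.8.0, and bash complains about /home/bfaskiplar/Downloads/tar.gz, so the unquoted space in "tar.gz packages" is apparently splitting the path somewhere in the generated Makefile. A sketch of the usual clean sequence from a space-free path:
        mv "$HOME/Downloads/tar.gz packages" "$HOME/Downloads/tarballs"
        cd "$HOME/Downloads/tarballs/libxml2-2.8.0"
        ./configure --prefix=/usr/local
        make
        sudo make install
        # headers land under /usr/local/include/libxml2; compile with:
        # gcc myprog.c $(xml2-config --cflags --libs)
    The Synaptic side is a separate issue: on Ubuntu the headers ship in the separate -dev package (libxml2-dev), which would also explain gcc not finding them after installing plain libxml2.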

    Read the article

  • Any reason not to disable Windows kernel paging?

    - by Nathaniel
    So I'm planning on eventually going to 2 GB (mobo max) RAM from 1 GB, and I want to disable kernel paging once I do, because I've heard it can give a performance boost (and that I believe). Any reason not to do it or any general thoughts about it? Edit: for clarification, this is not disabling general RAM paging. This is disabling having kernel memory paged (or at least parts of it, as Charlls noted).
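    For reference, and as my addition rather than anything named in the question: the setting usually meant by "disable kernel paging" is DisablePagingExecutive, which keeps kernel-mode code and drivers resident instead of pageable. A sketch of flipping it:
        rem Keep kernel-mode code and drivers out of the pagefile (reboot required)
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" ^
            /v DisablePagingExecutive /t REG_DWORD /d 1 /f
    The commonly cited caveat: it pins memory that would otherwise be reclaimable, so on a 1-2 GB machine the "boost" can just as easily turn into pressure on the rest of the working set.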

    Read the article
