Search Results

Search found 2031 results on 82 pages for 'james moore'.

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here: http://article.gmane.org/gmane.comp.search.xapian.general/8855 and http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/. I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings are building and passing their versions of smoketest correctly. However, the PHP part of the build is failing with this error:

        /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory
        FAIL: smoketest.php

    There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/ but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with:

        dpkg-source: error: aborting due to unexpected upstream changes

    Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas?

    Edit post James' answer: Builds fine if I follow instructions exactly. I built it on a test VM initially, but that didn't build the PHP package as PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error:

        Setting up php5-xapian (1.2.8-1) ...
        Processing triggers for libapache2-mod-php5 ...
        dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied
        dpkg: error processing libapache2-mod-php5 (--install): subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing: libapache2-mod-php5

    It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.
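
    For reference, the working install sequence from that last paragraph looks something like this (a sketch assuming the stock Ubuntu 12.04 Apache init scripts):

        # stop Apache so the package's post-installation script can restart it cleanly
        sudo service apache2 stop
        sudo dpkg -i php5-xapian_*.deb
        sudo service apache2 start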

  • I can play "test" sounds, but no other audio works

    - by Callum
    I'm running Windows XP, and last night my PC was infected by a frustrating virus (one of those viruses that won't let you open virus checkers, etc.). I finally killed it two hours later, but it involved some heavy-duty antidote. One side effect is my audio is now gone. Except it's not entirely gone, because when I open the Realtek HD Audio Manager in the task bar, I can play all the "test" sounds. The speakers, the sound card, etc. are therefore working fine. But in things like YouTube or Windows Media Player, there's no sound. I'm guessing there's a setting that needs to be reconfigured somewhere... but where? Maybe relevant: one thing I did do last night was "play" with the system registry. Any help would be greatly appreciated. Thanks.

    SOLVED! The two-hour battle with my computer virus resulted in my computer permanently thinking it was in Safe Mode, regardless of how it booted up. I was able to "fix" this by following the post by hsandler in this thread: http://www.petri.co.il/forums/showthread.php?t=23032&page=2 I then rebooted... and let me tell you, the Windows Startup music has never sounded so sweet. Thanks to all, especially James, whose advice gave me a major clue as to what the problem was.

  • SFTP, Chroot problems on Redhat

    - by Curtis_w
    I'm having problems setting up sftp with a ChrootDirectory. I've done an equivalent setup on other distros, but for some reason I cannot get it to work on a Redhat AMI. The changes to my sshd_config file are:

        Subsystem sftp internal-sftp
        Match Group ftponly
            PasswordAuthentication yes
            X11Forwarding no
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no

    I have the users' homes at /home/user, owned by root. After connecting with a user in the ftponly group, I'm dropped into / without permissions for anything, and am unable to do anything:

        sftp bob@localhost
        Connecting to localhost...
        bob@localhost's password:
        sftp> pwd
        Remote working directory: /

    I can connect normally with users not in the ftponly group. OpenSSH version is 5.3. I've experimented with different permissions, as well as having users own their own home directory (which gives a "Write failed: Broken pipe" error), and so far nothing has worked. I'm sure it's a permissions error, or something equally trivial, but at this point my eyes are beginning to glaze over, and any help would be greatly appreciated.

    EDIT: James and Madhatter, thanks for clarifying. I was confused by chroot dropping me in /... I just didn't think it through properly. I've added the appropriate directories and permissions to get read access. One other key part was enabling write access to chrooted homes with setsebool -P ssh_chroot_rw_homedirs on. I think I'm all set now. Thanks for the help.
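
    For anyone hitting the same wall, here's a sketch of the layout and permissions that satisfy sshd's chroot checks (the upload directory name is just an example):

        # the chroot target, and every directory above it, must be owned by
        # root and not group- or world-writable
        chown root:root /home/user
        chmod 755 /home/user

        # give the user a writable directory inside the jail
        mkdir -p /home/user/upload
        chown user:ftponly /home/user/upload

        # on SELinux-enabled Red Hat systems, allow writes into chrooted homes
        setsebool -P ssh_chroot_rw_homedirs on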

  • Using Truecrypt to secure mySQL database, any pitfalls?

    - by Saul
    The objective is to secure my database data from server theft, i.e. the server is at a business office location with normal premises lock and burglar alarm, but because the data is personal healthcare data I want to ensure that if the server were stolen the data would be unavailable as encrypted. I'm exploring installing MySQL on a mounted TrueCrypt encrypted volume. It all works fine, and when I power off, or just cruelly pull the plug, the encrypted drive disappears. This seems a load easier than encrypting data in the database, and I understand that if there is a security hole in the web app, or a user gets physical access to a plugged-in server, the data is compromised, but as a sanity check, is there any good reason not to do this?

    @James: I'm thinking that in a theft scenario it's not going to be powered down nicely, and so it is likely to crash any DB transactions running. But then if someone steals the server I'm going to need to rely on my off-site backup anyway.

    @tomjedrz: it's kind of all sensitive: individual personal and address details linked to medical referrals/records. It would be as bad in our field as losing credit card data, but it means that almost everything in the database would need encryption... so I figured it's better to run the whole DB in an encrypted partition. If I encrypt data in the tables, there's got to be a key somewhere on the server, I'm presuming, which seems more of a risk if the box walks. At the moment the app is configured to drop a dump of data (weekly full, and then deltas hourly using rdiff) into a directory also on the TrueCrypt disk. I have an off-site box running WS_FTP Pro scheduled to connect by FTPS and sync down the backup, again into a TrueCrypt-mounted partition.
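
    The MySQL side of this is just a data-directory setting; a minimal sketch, assuming the TrueCrypt volume is mounted at /mnt/tc (a hypothetical mount point):

        # /etc/my.cnf - keep the data files on the encrypted volume so nothing
        # sensitive ever touches the bare disk
        [mysqld]
        datadir = /mnt/tc/mysql

    One thing to watch: MySQL must not be started at boot before the volume is mounted, or it will fail to start (or, depending on the init script, initialise an empty data directory on the unencrypted disk).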

  • Allow outgoing connections for DNS

    - by Jimmy
    I'm new to IPtables, but I am trying to setup a secure server to host a website and allow SSH. This is what I have so far:

        #!/bin/sh
        i=/sbin/iptables

        # Flush all rules
        $i -F
        $i -X

        # Setup default filter policy
        $i -P INPUT DROP
        $i -P OUTPUT DROP
        $i -P FORWARD DROP

        # Respond to ping requests
        $i -A INPUT -p icmp --icmp-type any -j ACCEPT

        # Force SYN checks
        $i -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # Drop all fragments
        $i -A INPUT -f -j DROP

        # Drop XMAS packets
        $i -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # Drop NULL packets
        $i -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # Stateful inspection
        $i -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT

        # Allow established connections
        $i -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow unlimited traffic on loopback
        $i -A INPUT -i lo -j ACCEPT
        $i -A OUTPUT -o lo -j ACCEPT

        # Open nginx
        $i -A INPUT -p tcp --dport 443 -j ACCEPT
        $i -A INPUT -p tcp --dport 80 -j ACCEPT

        # Open SSH
        $i -A INPUT -p tcp --dport 22 -j ACCEPT

    However I've locked down my outgoing connections and it means I can't resolve any DNS. How do I allow that? Also, any other feedback is appreciated. James
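
    A sketch of the rules that would answer this (not from the original post): with an OUTPUT policy of DROP you need to let DNS queries out and, more generally, let replies to accepted inbound connections back out.

        # Allow DNS lookups out (UDP for normal queries, TCP for large responses)
        $i -A OUTPUT -p udp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
        $i -A OUTPUT -p tcp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT

        # Without this, replies from the web/SSH services opened above are
        # silently dropped by the OUTPUT policy
        $i -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT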

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C-API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage.

    At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

    Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage, and new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID, but in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. They were designed to address massive amounts of data, millions of requests per second, and to scale out across multiple systems.

    The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web storefront didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins. They set about using relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations.

    The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB, except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk.

    Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - and then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running on Berkeley DB HA, scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not.

    So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really.

    Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
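
    Since the whole argument rests on Berkeley DB being a linked-in key/value library, here is a minimal sketch of the C API storing and fetching one pair (error checking omitted; illustrative only, not from the article):

        #include <stdio.h>
        #include <string.h>
        #include <db.h>

        int main(void) {
            DB *dbp;
            DBT key, data;
            char *k = "greeting", *v = "hello";

            /* create a handle and open (or create) a btree database file */
            db_create(&dbp, NULL, 0);
            dbp->open(dbp, NULL, "demo.db", NULL, DB_BTREE, DB_CREATE, 0);

            memset(&key, 0, sizeof(key));
            memset(&data, 0, sizeof(data));
            key.data = k;  key.size = strlen(k) + 1;
            data.data = v; data.size = strlen(v) + 1;
            dbp->put(dbp, NULL, &key, &data, 0);    /* store */

            memset(&data, 0, sizeof(data));
            dbp->get(dbp, NULL, &key, &data, 0);    /* fetch */
            printf("%s => %s\n", k, (char *) data.data);

            dbp->close(dbp, 0);
            return 0;
        }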

  • The Definitive C++ Book Guide and List

    - by grepsedawk
    After more than a few questions about deciding on C++ books I thought we could make a better community wiki version. Providing QUALITY books and an approximate skill level. Maybe we can add a short blurb/description about each book that you have personally read / benefited from. Feel free to debate quality, headings, etc. Note: There is a similar post for C: The Definitive C Book Guide and List

    Reference Style - All Levels
    • The C++ Programming Language - Bjarne Stroustrup
    • C++ Standard Library Tutorial and Reference - Nicolai Josuttis

    Beginner / Introductory
    • C++ Primer - Stanley Lippman / Josée Lajoie / Barbara E. Moo
    • Accelerated C++ - Andrew Koenig / Barbara Moo
    • Thinking in C++ - Bruce Eckel (2 volumes; the 2nd is more about the standard library, but still very good)

    Best Practices
    • Effective C++ - Scott Meyers
    • Effective STL - Scott Meyers

    Intermediate
    • More Effective C++ - Scott Meyers
    • Exceptional C++ - Herb Sutter
    • More Exceptional C++ - Herb Sutter
    • C++ Coding Standards: 101 Rules, Guidelines, and Best Practices - Herb Sutter / Andrei Alexandrescu
    • C++ Templates: The Complete Guide - David Vandevoorde / Nicolai M. Josuttis
    • Large Scale C++ Software Design - John Lakos

    Above Intermediate
    • Modern C++ Design - Andrei Alexandrescu
    • C++ Template Metaprogramming - David Abrahams / Aleksey Gurtovoy
    • Inside the C++ Object Model - Stanley Lippman

    Classics / Older
    Note: Some information contained within these books may not be up to date and no longer considered best practice.
    • The Design and Evolution of C++ - Bjarne Stroustrup
    • Ruminations on C++ - Andrew Koenig / Barbara Moo
    • Advanced C++ Programming Styles and Idioms - James Coplien

  • TFS Build errors TF224003, TF215085, TF215076

    - by iamdudley
    Hi, I am using TFS 2008 and VS 2008. I run nightly builds for about 20 applications using one build agent, and the builds are scheduled for either 1am or 2am. Most of the builds succeed; however, 6 of them fail regularly with similar errors. The errors are either the first two below, or the third one by itself:

        TF215085: An error occurred while connecting to agent \xxxx\BUILDMACHINE: TF215076: Team Foundation Build on computer BUILDMACHINE (port 9191) is not responding. (Detail Message: The request was aborted: The operation has timed out.)

        11/04/2010 2:10:10 AM TF224003: An exception occurred on the build computer BUILDMACHINE: The build (vstfs:///Build/Build/2632) has already completed and cannot be started again.

        TF215085: An error occurred while connecting to agent \yyyyy\BA_WKSTFSBUILD: Team Foundation services are not available from server srvtfs. Technical information (for administrator): The operation has timed out

    It looks to me like some kind of communication error, maybe the port gets overloaded - can this happen? Should I spread the builds out a bit more? In the build definition it says "Queue the build on the default build agent at", so I figured that if I scheduled them to start at the same time they would be queued and occur sequentially. Most of the suggestions I've found online for these errors are for all-or-nothing scenarios where no builds work at all, whereas my problem is that most builds work but some consistently do not. Judging by the dates of the last successful builds of these 6 failing builds, I believe it is the same 6 failing every night. (I'm editing the build definitions now to keep the failed builds so I can get some more info on the problem.) Any help on this would be much appreciated. James.

  • .NET "must-have" development tools

    - by nzpcmad
    James Avery wrote a classic article a while back entitled Ten Must-Have Tools Every Developer Should Download Now, which is a companion to Visual Studio Add-Ins Every Developer Should Download Now, and Scott Hanselman has an excellent list on his blog. But if you were on a desert island and were only allowed three .NET development tools, which ones would you pick? Update: Assuming you already have an IDE like Visual Studio ...

    Update (5): Up to 08/01, the current state of play:

        Reflector - 13
        Resharper - 9
        NUnit + TestDriven.Net - 7
        Refactor Pro - 4
        Process Explorer (other Sysinternals) - 3
        SnippetCompiler - 3
        CodeRush - 3
        MSDN Library - 2
        LinqPad - 2
        Cruisecontrol.net - 2
        VMWare - 2
        RhinoMocks - 2
        Fiddler - 2
        PowerShell - 2
        PowerCommands for VS 2008 - 1
        Sandcastle - 1
        SQL Profiler - 1
        Redgate ANTS profiler - 11
        NCover - 1
        VisualSVN - 1
        Rubber Ducky - 1
        WinMerge - 1
        NAnt - 1
        ViEmu - 1
        AnkhSVN - 1
        dotTrace Profiler - 1
        BeyondCompare - 1
        DPack VS Plugin - 1
        WCF Trace Viewer (SDK) - 1
        xUnit.net - 1
        SourceGear DiffMerge - 1
        Ghostdoc - 1
        Expression Studio - 1
        XAML Pad - 1
        KaXaml - 1
        Blender for 3D modeling - 1
        Snoop (a WPF tool) - 1
        DiffMerge - 1
        DPack - 1
        NDepend - 1
        Kodos - 1
        WatiN - 1
        HTTPWatch Basic Edition - 1
        Paint.Net - 1
        Mole For VS - 1

    What I find particularly interesting about this is that "NUnit + TestDriven.Net" is right up there in third place, which shows the growing emphasis on testing as an integral part of the development process rather than as an adjunct which is simply bolted on. And I'm somewhat perplexed that Codesmith didn't receive a single vote.

  • Top-Rated JavaScript Blogs

    - by Andreas Grech
    I am currently trying to find some blogs that talk (almost solely) about the JavaScript language, and this is due to the fact that most of the time, bloggers with real-life experience at work or at home development can explain certain quirks and hidden features more clearly and concisely than most 'Official Language Specifications'. Below is a list of blogs that are JavaScript-based (I will update the list as more answers flow in):

        DHTML Kitchen, by Garrett Smith
        Robert's Talk, by Robert Nyman
        EJohn, by John Resig (of jQuery)
        Crockford's JavaScript Page, by Douglas Crockford
        Dean.edwards.name, by Dean Edwards
        Ajaxian, by various (@Martin)
        The JavaScript Weblog, by various
        SitePoint's JavaScript and CSS Page, by various
        AjaxBlog, by various
        Eric Lippert's Blog, by Eric Lippert (talks about JScript and JScript.NET)
        Web Bug Track, by various (@scunliffe)
        The Strange Zen Of JavaScript, by Scott Andrew
        Alex Russell (of Dojo) (@Eran Galperin)
        Ariel Flesler (@Eran Galperin)
        Nihilogic, by Jacob Seidelin (@llimllib)
        Peter's Blog, by Peter Michaux (@Borgar)
        Flagrant Badassery, by Steve Levithan (@Borgar)
        ./with Imagination, by Dustin Diaz (@Borgar)
        HedgerWow (@Borgar)
        Dreaming in Javascript, by Nosredna
        spudly.shuoink.com, by Stephen Sorensen
        Yahoo! User Interface Blog, by various (@Borgar)
        remy sharp's b:log, by Remy Sharp (@Borgar)
        JScript Blog, by the JScript Team (@Borgar)
        Dmitry Baranovskiy's Web Log, by Dmitry Baranovskiy
        James Padolsey's Blog (@Kenny Eliasson)
        Perfection Kills: Exploring JavaScript by example, by Juriy Zaytsev
        DailyJS (@Ric)
        NCZOnline, by Nicholas C. Zakas (@Kenny Eliasson)

    Which top-rated blogs am I currently missing from the above list that you think should be imperative for any JavaScript developer to read (and follow)?

  • nHibernate, Automapping and Chained Abstract Classes

    - by Mr Snuffle
    I'm having some trouble using nHibernate, automapping and a class structure using multiple chains of abstract classes. It's something akin to this:

        public abstract class AbstractClassA {}
        public abstract class AbstractClassB : AbstractClassA {}
        public class ClassA : AbstractClassB {}

    When I attempt to build these mappings, I receive the following error:

        FluentNHibernate.Cfg.FluentConfigurationException was unhandled
        Message: An invalid or incomplete configuration was used while creating a SessionFactory.
        Check PotentialReasons collection, and InnerException for more detail.
        Database was not configured through Database method.

    However, if I remove the abstract keyword from AbstractClassB, everything works fine. The problem only occurs when I have more than one abstract class in the class hierarchy. I've manually configured the automapping to include both AbstractClassA and AbstractClassB using the following binding class:

        public class BindItemBases : IManualBinding
        {
            public void Bind(FluentNHibernate.Automapping.AutoPersistenceModel model)
            {
                model.IncludeBase<AbstractClassA>();
                model.IncludeBase<AbstractClassB>();
            }
        }

    I've had to do a bit of hackery to get around this, but there must be a better way to get this working. Surely nHibernate supports something like this; I just haven't figured out how to configure it right. Cheers, James
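
    For comparison, a hedged sketch of a fluent configuration that does go through the Database() method and registers both base classes (the connection string and types here are placeholders, not from the original question):

        var sessionFactory = Fluently.Configure()
            // the "Database was not configured through Database method" part of
            // the error points here: a database provider must be supplied
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString("Server=.;Database=Demo;Trusted_Connection=True;"))
            .Mappings(m => m.AutoMappings.Add(
                AutoMap.AssemblyOf<ClassA>()
                    .IncludeBase<AbstractClassA>()
                    .IncludeBase<AbstractClassB>()))
            .BuildSessionFactory();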

  • Reachability sometimes fails, even when we do have an internet connection

    - by stoutyhk
    Hi, I've searched but can't see a similar question. I've added a method to check for an internet connection per the Reachability example. It works most of the time, but when installed on the iPhone it quite often fails even when I do have internet connectivity (only when on 3G/EDGE; WiFi is OK). Basically the code below returns NO. If I switch to another app, say Mail or Safari, and connect, then switch back to the app, then the code says the internet is reachable. Kinda seems like it needs a 'nudge'. Anyone seen this before? Any ideas? Many thanks, James

        + (BOOL) doWeHaveInternetConnection {
            BOOL success;
            // google should always be up, right?!
            const char *host_name = [@"google.com" cStringUsingEncoding:NSASCIIStringEncoding];
            SCNetworkReachabilityRef reachability = SCNetworkReachabilityCreateWithName(NULL, host_name);
            SCNetworkReachabilityFlags flags;
            success = SCNetworkReachabilityGetFlags(reachability, &flags);
            BOOL isAvailable = success && (flags & kSCNetworkFlagsReachable) && !(flags & kSCNetworkFlagsConnectionRequired);
            if (isAvailable) {
                NSLog(@"Google is reachable: %d", flags);
            } else {
                NSLog(@"Google is unreachable");
            }
            CFRelease(reachability); // note: the original code leaked this reference
            return isAvailable;
        }
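
    One hedged guess at the 3G-only failures: on cellular, the radio may be down until something asks for a connection, so the flags can report "reachable, but connection required". A sketch of a check that treats an on-demand connection as available (my adaptation, not the official Reachability code):

        BOOL canConnectOnDemand =
            (flags & kSCNetworkReachabilityFlagsConnectionOnDemand) ||
            (flags & kSCNetworkReachabilityFlagsConnectionOnTraffic);
        BOOL needsUser = (flags & kSCNetworkReachabilityFlagsInterventionRequired) != 0;

        // reachable, and either no connection is needed or the OS can bring
        // one up by itself (i.e. without user intervention)
        BOOL isAvailable = success
            && (flags & kSCNetworkReachabilityFlagsReachable)
            && (!(flags & kSCNetworkReachabilityFlagsConnectionRequired)
                || (canConnectOnDemand && !needsUser));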

  • Nested dereferencing arrows in Perl: to omit or not to omit?

    - by DVK
    In Perl, when you have a nested data structure, it is permissible to omit dereferencing arrows at the second and deeper levels of nesting. In other words, the following two syntaxes are identical:

        my $hash_ref = { 1 => [ 11, 12, 13 ], 3 => [ 31, 32 ] };
        my $elem1 = $hash_ref->{1}->[1];
        my $elem2 = $hash_ref->{1}[1]; # exactly the same as above

    Now, my question is, is there a good reason to choose one style over the other? It seems to be a popular bone of stylistic contention (just on SO, I accidentally bumped into this and this in the space of 5 minutes). So far, none of the usual suspects says anything definitive: perldoc merely says "you are free to omit the pointer dereferencing arrow". Conway's "Perl Best Practices" says "whenever possible, dereference with arrows", but it appears to only apply to the context of dereferencing the main reference, not the optional arrows at the second level of nested data structures. "Mastering Perl for Bioinformatics" author James Tisdall doesn't give a very solid preference either: "The sharp-witted reader may have noticed that we seem to be omitting arrow operators between array subscripts. (After all, these are anonymous arrays of anonymous arrays of anonymous arrays, etc., so shouldn't they be written $array->[$i]->[$j]->[$k]?) Perl allows this; only the arrow operator between the variable name and the first array subscript is required. It makes things easier on the eyes and helps avoid carpal tunnel syndrome. On the other hand, you may prefer to keep the dereferencing arrows in place, to make it clear you are dealing with references. Your choice." Personally, I'm on the side of "always put the arrows in, since it's more readable and obvious you're dealing with a reference".

  • Color scheme: Smooth Dark

    - by xsl
    As requested by James McNellis I exported my Visual Studio color scheme and posted it here. It is basically the default Visual Studio color scheme with a dark background and slightly modified colors.

    Visual Studio 2010

    Download: http://www.file-upload.net/download-2588083/Smooth-Dark.vssettings.html

    Installation: Select Tools > Import and Export Settings. Choose Import Selected Environment Settings. Select the file you downloaded. Import only the color settings. Click Finish.

    Console2 Installation

    In the file console.xml replace the colors element with:

        <colors>
            <color id="0" r="23" g="27" b="32"/>
            <color id="1" r="120" g="150" b="180"/>
            <color id="2" r="139" g="163" b="137"/>
            <color id="3" r="119" g="181" b="181"/>
            <color id="4" r="181" g="122" b="119"/>
            <color id="5" r="186" g="141" b="183"/>
            <color id="6" r="168" g="171" b="129"/>
            <color id="7" r="182" g="182" b="182"/>
            <color id="8" r="114" g="114" b="114"/>
            <color id="9" r="120" g="150" b="180"/>
            <color id="10" r="139" g="163" b="137"/>
            <color id="11" r="119" g="181" b="181"/>
            <color id="12" r="181" g="122" b="119"/>
            <color id="13" r="186" g="141" b="183"/>
            <color id="14" r="168" g="171" b="129"/>
            <color id="15" r="255" g="255" b="255"/>
        </colors>

  • Detect if any USB drive is inserted or not using a WinForm Application in Visual C#

    - by Pavan Kumar
    I want to do the following things in my application:

    1) I want to display in my application whether any USB drive is inserted or not, to prompt the user to insert a USB drive. I just want to notify the user if a USB drive is inserted, and prompt him to insert one otherwise, using a label or something (I want to avoid a messagebox, as it would keep appearing whenever a device is inserted or removed, which would be irritating for the end user). If any USB drive is present, display "USB drive detected" in the label. The user may add one or more USB sticks, but the status will remain the same. When there is none, the status of the label will change to "No USB drives found. Please insert a USB drive".

    2) When one or more USB drives are added, the volume name with the drive letter, for example "James (F:)", is added to the combobox list. The combobox also needs to remove the entry for a USB drive automatically when it is removed. So when there is no USB drive the list should be empty, and the label will again prompt the user to insert a USB stick or drive.
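
    A minimal sketch of a polling approach (the control names label1/comboBox1 and the use of a WinForms Timer are my assumptions, not from the question):

        using System.IO;
        using System.Linq;

        // called from a Timer tick (or a WM_DEVICECHANGE handler) to refresh the UI
        private void RefreshUsbDrives()
        {
            var usb = DriveInfo.GetDrives()
                .Where(d => d.DriveType == DriveType.Removable && d.IsReady)
                .ToList();

            label1.Text = usb.Any()
                ? "USB drive detected"
                : "No USB drives found. Please insert a USB drive";

            comboBox1.Items.Clear();
            foreach (var d in usb)
                // e.g. "James (F:)"
                comboBox1.Items.Add(string.Format("{0} ({1})", d.VolumeLabel, d.Name.TrimEnd('\\')));
        }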

  • Are Django template tags cached?

    - by thebossman
    I have gone through the (painful) process of writing a custom template tag for use in Django. It is registered as an inclusion_tag so that it renders a template. However, this tag breaks as soon as I try to change something. I've tried changing the number of parameters and correspondingly changing the parameters when it's called. It's clear the new tag code isn't being loaded, because an error is thrown stating that there is a mismatch in the number of parameters, and it's evident that it's attempting to call the old function. The same problem occurs if I try to change the name of the template being rendered and correspondingly change the name of the template on disk. It continues to try to call the old template. I've tried clearing old .pyc files with no luck. Overall, the system is acting as though it's caching the template tags, likely due to the register command. I have dug through endless threads trying to find out if this is so, but all I could find is James Bennett stating here that register doesn't do anything. Please help!

  • ASP.Net MVC 3 Full Name In DropDownList

    - by tgriffiths
    I am getting a bit confused with this and need a little help please. I am developing an ASP.Net MVC 3 web application using Entity Framework 4.1. I have a DropDownList on one of my Razor Views, and I wish to display a list of full names, for example:

        Tom Jones
        Michael Jackson
        James Brown

    In my Controller I retrieve a List of User objects, then select the FirstName and LastName of each User, and pass the data to a SelectList:

        List<User> Requesters = _userService.GetAllUsersByTypeIDOrgID(46, user.organisationID.Value).ToList();
        var RequesterNames = from r in Requesters
                             let person = new { UserID = r.userID, FullName = new { r.firstName, r.lastName } }
                             orderby person.FullName ascending
                             select person;
        viewModel.RequestersList = new SelectList(RequesterNames, "UserID", "FullName");
        return View(viewModel);

    In my Razor View I have the following:

        @Html.DropDownListFor(model => model.requesterID, Model.RequestersList, "Select", new { @class = "inpt_a" })
        @Html.ValidationMessageFor(model => model.requesterID)

    However, when I run the code I get the following error: "At least one object must implement IComparable." I feel as if I am going about this the wrong way, so could someone please help with this? Thanks.
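
    The "At least one object must implement IComparable" error comes from the orderby: FullName here is an anonymous type, which LINQ cannot compare (and SelectList cannot render as text either). One way to fix it, sketched with the same names as the question, is to project the full name as a single string:

        var RequesterNames = from r in Requesters
                             let fullName = r.firstName + " " + r.lastName
                             orderby fullName ascending
                             select new { UserID = r.userID, FullName = fullName };

        viewModel.RequestersList = new SelectList(RequesterNames, "UserID", "FullName");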

  • Reset.css and then a Set.css

    - by Sixfoot Studio
    I have, for a while now, been using a reset.css file to reset everything before I start laying out my HTML designs. The reset is great in that it allows one to better control attributes such as margins, padding, line-height, etc. for all browsers. In essence, the flatliner of CSS files. Now, to get the heart beating again, I need a "set.css" file. So what I have done is created an HTML file with all the possible elements on the page, to then go and set the padding, margins, etc. for the h1, h2, p, td and other elements. I need some help with this, as I am not sure what the defaults normally are. I had a look at the Firefox default CSS file that's used to style a raw HTML file, but it doesn't cover all the scenarios I could come up with when developing a site. Here's an example of the set.html file (a work in progress), which can be used as a lorem ipsum filler to add to your first page in a CMS and then style with a "set.css" file: http://www.sixfoot.co.za/labs/Html-Css/set.html I'd appreciate it if someone knows whether something like a set.css file already exists, or if someone could tell me what the general padding and margins are in cases like this when you have reset the CSS. Cheers, James
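
    For reference, a hedged sketch of the kind of rules a set.css could start from; the values approximate common browser defaults (they vary slightly between browsers, so treat them as assumptions rather than a spec):

        /* approximate browser defaults - a starting point for set.css */
        body       { margin: 8px; }
        p          { margin: 1em 0; }
        h1         { font-size: 2em;    margin: 0.67em 0; font-weight: bold; }
        h2         { font-size: 1.5em;  margin: 0.83em 0; font-weight: bold; }
        h3         { font-size: 1.17em; margin: 1em 0;    font-weight: bold; }
        ul, ol     { margin: 1em 0; padding-left: 40px; }
        blockquote { margin: 1em 40px; }
        table      { border-collapse: separate; border-spacing: 2px; }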

  • Use jquery to create a multidimensional array

    - by Simon M White
    I'd like to use jQuery and a multidimensional array to show a random quote, plus the name of the individual who wrote it, as a separate item. I'll then be able to use CSS to style them differently. The quote will change upon page refresh. So far I have this code, which combines the quote with the name of the person who wrote it:

        $(document).ready(function () {
            var myQuotes = new Array();
            myQuotes[0] = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec in tortor mauris. Peter Jones, Dragons Den";
            myQuotes[1] = "Curabitur interdum, nibh et fringilla facilisis, lacus ipsum pulvinar mauris, eu facilisis justo arcu eget diam. Duis id sagittis elit. Theo Pathetis, Dragons Den";
            myQuotes[2] = "Vivamus purus purus, tincidunt et porttitor et, euismod sit amet urna. Etiam sollicitudin eros nec metus pretium scelerisque. James Caan, Dragons Den";

            var myRandom = Math.floor(Math.random() * myQuotes.length);
            $('.quote-holder blockquote span').html(myQuotes[myRandom]);
        });

    Any help would be greatly appreciated.
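
    A sketch of one way to keep the quote and the author separate, using an array of objects and two target elements (the .text/.author spans are assumed markup, not from the question):

        $(document).ready(function () {
            // each quote keeps its text and author apart so they can be styled separately
            var myQuotes = [
                { text: "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec in tortor mauris.", author: "Peter Jones, Dragons Den" },
                { text: "Curabitur interdum, nibh et fringilla facilisis, lacus ipsum pulvinar mauris, eu facilisis justo arcu eget diam. Duis id sagittis elit.", author: "Theo Pathetis, Dragons Den" },
                { text: "Vivamus purus purus, tincidunt et porttitor et, euismod sit amet urna. Etiam sollicitudin eros nec metus pretium scelerisque.", author: "James Caan, Dragons Den" }
            ];

            var pick = myQuotes[Math.floor(Math.random() * myQuotes.length)];
            $('.quote-holder blockquote .text').html(pick.text);
            $('.quote-holder blockquote .author').html(pick.author);
        });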

  • Visual C++ Testing problem

    - by JamesMCCullum
    Hi there. I have installed VisualAssert and cFix. I have been using Visual Studio C++ and programming in C++/CLI. I have a working chess game program that works perfectly by itself, and I have been studying testing and have many examples (with tutorials) I have found on the net that compile and run in Visual Studio. But as soon as I try to implement those tests on my chess game, I get this problem:

        1>------ Build started: Project: ChessRound1, Configuration: Debug Win32 ------
        1>Compiling...
        1>stdafx.cpp
        1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe
        1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C4394: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol should not be marked with __declspec(allocate)
        1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C2393: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol cannot be allocated in segment '.CRT$XCX'
        1>C:\Program Files\VisualAssert\include\cfixpe.h(244) : error C2440: 'initializing' : cannot convert from 'void (__cdecl *)(void)' to 'const CFIX_CRT_INIT_ROUTINE'
        1>        Address of a function yields __clrcall calling convention in /clr:pure and /clr:safe; consider using __clrcall in target type
        1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe
        1>Build log was saved at "file://c:\Users\james\Documents\Visual Studio 2008\Projects\ChessRound1\ChessRound1\Debug\BuildLog.htm"
        1>ChessRound1 - 4 error(s), 0 warning(s)
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Any ideas what I'm doing wrong? I'm working with Windows Forms and have a heap of .cpp source files. Any help would be appreciated. Thanks.

  • Help with editing data in entity framework.

    - by Levelbit
    The title of this question is simple because there is no easy explanation for what I'm trying to ask you. I hope you'll understand in the end :). I have two tables in my database, Company and Location (relationship: one-to-many), and corresponding entity sets. In my WPF application I have a datagrid which I want to fill with Locations, and I want to be able to edit every row in a separate window as some form of details view (so I don't want to edit my data in the datagrid). I did this by accessing the Location entity from the selected row and creating a new Location entity, then copying the properties from the original entity to the newly created one (something like cloning the object). After editing, if I press OK the changed data is copied back to the original object, and if I press Cancel nothing happens. Of course, you are probably thinking I could use the NoTracking option and the AttachAsModified method, as was mentioned as a solution in some earlier questions (see: http://stackoverflow.com/questions/803022/changing-entities-in-the-entityframework) by Alex James, but let's say I had some problems with that and I have my own reasons for doing it this way. Finally, because the navigation property (Company) of my Location entity is assigned to the newly created Location object during cloning, for some reason an additional object (a copy of the object I want to edit from the datagrid) is created in the object context without my will (a similar problem: http://blogs.msdn.com/alexj/archive/2009/12/02/tip-47-how-fix-up-can-make-it-hard-to-change-relationships.aspx). So when I do ObjectContext.SaveChanges it inserts an additional row into my database table Location, the same as the one I wanted to edit. I have read about things like this, but I don't quite understand why that is, or how to block or override this behaviour. I hope I was as clear as I could be. Please, solutions or some other ideas.
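
    If the edited clone never gets attached or wired to navigation properties, the duplicate-insert problem goes away; EF4's ObjectContext has a method for exactly this copy-back step. A hedged sketch (the entity set name "Locations" is an assumption):

        // editedCopy is the detached clone the details window worked on; the
        // original Location (same key) is still tracked by the context.
        // ApplyCurrentValues copies the scalar values across instead of
        // attaching the clone, so no second Location row gets inserted.
        context.ApplyCurrentValues("Locations", editedCopy);
        context.SaveChanges();

    The other half is not assigning the Company navigation property on the clone at all; it is the relationship fix-up triggered by that assignment that quietly adds the copy to the context.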

  • Embedding a CMS in an MVC Web App

    - by Mr Snuffle
    I'm working on a website for searching for businesses, then displaying a listing page. We've been toying with the idea of letting the clients manage their listing pages using an external CMS. I'm not sure how often this is done, or if it's even best practice. Ideally, we want to be able to set up a listing on our website, then give the clients access to an external CMS where they can manage their listing page. We then want to embed this custom page within our website, possibly using an iframe (which will come along with its own set of complications). We'd like this integration to be as seamless as possible. I'd personally prefer it if we could directly inject the HTML into our own page and bypass an iframe altogether, but I don't know of any CMS hosting services that provide an interface for such a thing. We've experimented a little with Squarespace, and we can get a fairly clean version of someone's page which would be well suited to an iframe. I'm wondering if anyone else has looked at integrating an externally hosted CMS into a website (in this case, we're using ASP.NET MVC). We'd also want to automate the creation of accounts on this external CMS, so when a user signed up we could just point them to the website with some login details. I have no idea if anyone offers a service like this, but any recommendations would be greatly appreciated. We could host a service ourselves too, but the aim is to have an external system that clients can use to manage their pages. Cheers, James

  • C# WPF DataBind boolean

    - by jtdangelo
    I have been using this website to learn a lot about C# for a while, but this is my first time posting a question. I look forward to hearing back from some of the seasoned C# veterans! I have been working on a C# 4.0 WPF project and need to figure out how to databind a boolean value. I have a reference to my Application.Current object in a window. My "App" object contains a boolean field called "Downloaded" that is true if the user has downloaded information from a web service. I need to databind a textbox's IsEnabled property to this Downloaded value. Any tips? Here is what I have come up with so far. (Any useful links to better learn WPF XAML are greatly appreciated!)

    C# code:

        class MainWindow : Window
        {
            ...
            private App MyApp = App.Current as App;
            ...
        }

    XAML:

        <TextBox ... IsEnabled="{Binding Source=MyApp, Path=Downloaded}" />

    James
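
    As written, {Binding Source=MyApp} sets the binding source to the literal string "MyApp" (a private field isn't reachable from XAML), and a plain field won't notify the UI when it changes. A hedged sketch of one way to wire it up; the property name and the INotifyPropertyChanged plumbing are my assumptions:

        // App.xaml.cs: expose Downloaded as a public, change-notifying property
        public partial class App : Application, INotifyPropertyChanged
        {
            private bool downloaded;
            public bool Downloaded
            {
                get { return downloaded; }
                set { downloaded = value; OnPropertyChanged("Downloaded"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    The TextBox can then bind straight to the running Application instance:

        <TextBox IsEnabled="{Binding Downloaded, Source={x:Static Application.Current}}" />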

  • Converting table columns to key value pairs

    - by TomD1
    I am writing a PL/SQL procedure that loads some data from Schema A into Schema B. They are both very different schemas and I can't change the structure of Schema B. Columns in various tables in Schema A (joined together in a view) need to be inserted into Schema B as key=value pairs in 2 columns of a table, each on a separate row. For example, an employee's first name might be present as employee.firstname in Schema A, but would need to be entered in Schema B as:

        id => 1, key => 'A123', value => 'Smith'

    There are almost 100 keys, with the potential for more to be added in future. This means I don't really want to hardcode any of these keys. Sample code:

        create table schema_a_employees (
            emp_id    number(8,0),
            firstname varchar2(50),
            surname   varchar2(50)
        );

        insert into schema_a_employees values ( 1, 'James', 'Smith' );
        insert into schema_a_employees values ( 2, 'Fred', 'Jones' );

        create table schema_b_values (
            emp_id    number(8,0),
            the_key   varchar2(5),
            the_value varchar2(200)
        );

    I thought an elegant solution would most likely involve a lookup table to determine what value to insert for each key, and wouldn't involve effectively hardcoding dozens of similar statements like...

        insert into schema_b_values ( 1, 'A123', v_firstname );
        insert into schema_b_values ( 1, 'B123', v_surname );

    What I'd like to be able to do is have a local lookup table in Schema A that lists all the keys from Schema B, along with a column that gives the name of the column in the table in Schema A that should be used to populate it, e.g. key "A123" in Schema B should be populated with the value of the column "firstname" in Schema A:

        create table schema_a_lookup (
            the_key              varchar2(5),
            the_local_field_name varchar2(50)
        );

        insert into schema_a_lookup values ( 'A123', 'firstname' );
        insert into schema_a_lookup values ( 'B123', 'surname' );

    But I'm not sure how I could dynamically use the values from the lookup table to tell Oracle which columns to use. So my question is, is there an elegant solution to populate the schema_b_values table with the data from schema_a_employees without hardcoding for every possible key (i.e. A123, B123, etc.)? Cheers.
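
    One hedged sketch of the dynamic approach: loop over the lookup table and build each INSERT ... SELECT with dynamic SQL. The column name has to be concatenated into the statement (identifiers can't be bind variables), so it is passed through dbms_assert as a guard against injection:

        begin
            for r in ( select the_key, the_local_field_name
                         from schema_a_lookup ) loop
                execute immediate
                    'insert into schema_b_values ( emp_id, the_key, the_value ) '
                    || 'select emp_id, :k, '
                    || dbms_assert.simple_sql_name(r.the_local_field_name)
                    || ' from schema_a_employees'
                    using r.the_key;
            end loop;
        end;
        /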
