Search Results

Search found 444 results on 18 pages for 'sensible eddie'.

Page 9/18

  • Single-developer GIT workflow (moving from straightforward FTP)

    - by melat0nin
    I'm trying to decide whether moving to VCS is sensible for me. I am a single web developer in a small organisation (5 people). I'm thinking of VCS (Git) for these reasons: version control, offsite backup, centralised code repository (can access from home). At the moment I generally work on a live server. I FTP in, make my edits and save them, then reupload and refresh. The edits are usually to theme/plugin files for CMSes (e.g. concrete5 or Wordpress). This works well but provides no backup and no version control. I'm wondering how best to integrate VCS into this procedure. I would envisage setting up a Git server on the company's web server, but I'm not clear how to push changes out to client accounts (usually VPSes on the same server) - at the moment I simply log into SFTP with their details and make the changes directly. I'm also not sure what would sensibly represent a repository - would each client's website get its own? Any insights or experience would be really helpful. I don't think I need the full power of Git by any means, but basic version control and de facto cloud access would be really useful.
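    One common shape for this workflow, sketched below with placeholder paths and hostnames, is one bare repository per client site on the company server, with working clones on your workstation and on each client's VPS:

      # On the company web server: one bare repository per client site
      git init --bare /srv/git/client-site.git

      # On your workstation: clone, edit theme/plugin files, commit, push
      git clone ssh://you@company-server/srv/git/client-site.git
      cd client-site
      git commit -am "Tweak theme"
      git push origin master

      # On the client's VPS: pull the pushed changes into the deployed copy
      cd /var/www/client-site && git pull origin master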

    Read the article

  • Is it appropriate to run a complex enterprise-system configuration and migration project in a similar way to a Scrum development project?

    - by AndyM
    I'm just starting out on the implementation of a large enterprise-wide system, which has complex requirements and many stakeholders. The company has been through a high-level evaluation and tender process and determined to purchase a highly configurable "off-the-shelf" product rather than building an entirely bespoke system. The system will replace several existing systems and will require a significant amount of data migration. I'm thinking that the implementation of this system (which is expected to take over 2 years) could be run in a similar way to a Scrum software development project, with the first sprints targeted at building the minimal possible functionality needed (across all functional areas), and then iteratively deepening the level of functionality according to stakeholder feedback. I think this will de-risk the project and help ensure a balance of stakeholder needs within the available time. The user stories are still the same; it's just that to implement them we have to work within the constraints of the pre-purchased system. When it comes to 'building stuff', instead of writing custom code the team will be configuring the off-the-shelf package, writing data conversion scripts and the like (and it should be a lot quicker!). Does this sound like a sensible approach? Does the Agile approach make sense here?

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes each to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage: Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code. Do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-PoT) divisions tend to be slow, however. Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bitshift (e.g. val >> 3) to do the division... not sure how performant this is in shaders, though. Not as intuitive to read values, but possibly quite a bit faster than division by a non-PoT value. Use a texture as a lookup table. I've heard that this is really fast. Or whatever solutions others can offer to achieve the same results -- minimal vertex data with sensible scaling.
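    For what it's worth, a sketch of the power-of-two option in GLSL (attribute and uniform names invented). Since the shorts are converted to floats by the attribute fetch anyway, the decode is a single multiply, and a constant like 1.0/8.0 is folded at compile time, so no bitshift is needed:

      // Assuming 8:1 packing on the CPU side: 1 short unit = 1/8 metre.
      attribute vec3 packedPosition;
      uniform mat4 uModelViewProjection;

      void main() {
          vec3 worldPos = packedPosition * (1.0 / 8.0); // back to metres
          gl_Position = uModelViewProjection * vec4(worldPos, 1.0);
      }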

    Read the article

  • How are certain analytics metrics (time on site, etc.) usually distributed?

    - by a barking spider
    I'm not sure if I've come to the right place to ask this question, but I'm gathering some information for a research project. We're trying to design an experiment that'll heavily involve web analytics, and I'm trying to figure out some sensible values of mean +/- standard deviation for the following visitor-level metrics (i.e., visitor 1 spends 2 minutes on site, visitor 2 spends 1 minute -- mean 1.5 +/- 0.71...): time spent on site, and page views. If time allowed, we would put up the sites and gather the information ourselves, but we have a grant deadline coming up. I realize that the distributions of these quantities are probably going to be heavily skewed towards zero, but we'll need some reasonable figures or estimates of these figures in order to do sample size calculations, etc. Anyway, I'm not sure where else I'd turn, and I certainly have had a difficult time finding these values in the prior literature. If someone could direct me to a paper with the right information, or if you have these figures on hand (perhaps taken directly from your logs!) -- that would be amazing, and I'd love to hear from you. Thanks in advance, and even though I'm not allowed to reveal too much, rest assured that this info'll be applied towards a good cause :)
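    No substitute for real logs, but a sketch of how placeholder figures can be generated for the power calculation, assuming the commonly reported claim that time-on-site is roughly log-normal (all parameters below are invented):

      import numpy as np

      # Illustrative only: invented parameters, not from real traffic.
      np.random.seed(42)
      seconds_on_site = np.random.lognormal(mean=4.0, sigma=1.2, size=10000)

      print("mean:   %.1f s" % seconds_on_site.mean())
      print("median: %.1f s" % np.median(seconds_on_site))
      print("std:    %.1f s" % seconds_on_site.std())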

    Read the article

  • Which powerful laptop, with UK keyboard and 8GB RAM?

    - by RobinL
    I've been searching high and low for high-spec laptops compatible with Ubuntu. There is a surprising lack of coherent information on the topic (considering the number of people who apparently want a good laptop with an open-source operating system). So I thought you may have some advice. My requirements: a) has >= 8GB RAM, b) is compatible with Ubuntu, c) has a UK keyboard and charger, d) does not cost the Earth. Which would you go for? Does anyone have good experience with high-end laptops running Ubuntu? So here's some background research: the Samsung Series 7 looks great, but has various problems on Ubuntu, including: poor battery life, touchpad does not work, graphics card not fully supported and sucks power when it does (see [here] and [here], for example). Other options on the [wish list] include: the sensible [Acer] (possibly n.1 choice, but not sure about graphics card compatibility or battery), a nice-looking [HP Pavilion dv6-6c56ea], which also has incompatibility issues (see [here] and [here] and check ubuntuforums), and another [Acer] which may be best due to its simplicity and cheapness. Other sub-questions: didn't Dell offer Ubuntu support for decent laptops (above 6GB RAM their offerings are scarce)? What about pre-installed options such as those provided by System76? If it weren't for the UK keyboard and charger, I'd probably go for this [amazing-looking] [machine]. Many thanks for any advice. P.S. Apologies for lack of hyperlinks; I'm a noob so only allowed 2 :( All 10 links are available here though for the interested reader :) Robin

    Read the article

  • How should I structure a site with content dependent on visitor type (not user)?

    - by Pedr
    I have a website that displays different content depending on two selections made by a visitor: Whether they are a teacher or student, and their learning level (from 4 options). Everything is public and they don't need to authenticate to access the content. Depending on their selection, different content is displayed across the whole site, other than a contact and about page. The tone of the language changes depending on whether the visitor is a student or teacher and the materials available on each page also change depending on the learning level, however in all cases, the structure of the site is identical. Currently I'm using a cookie to store the visitor's selections and render different content appropriately, so I have a single set of URLs which display different content depending on the cookie, with one of the permutations as default. I appreciate this is far from ideal, but what is the better option? Would I be better using a distinguishing segment for each selection, for example: http://example.com/teacher/lv3/resources/activities http://example.com/teacher/lv4/resources/activities http://example.com/student/lv4/resources/activities etc. What is the most sensible way to handle this situation?
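    If the segmented URLs are adopted, one common arrangement (a sketch with invented parameter names, assuming Apache mod_rewrite) is to expose the visitor type and level in the path but route every permutation to the same handler:

      RewriteEngine On
      RewriteRule ^(teacher|student)/lv([1-4])/(.*)$ index.php?role=$1&level=$2&path=$3 [L,QSA]

    This keeps a single template set behind the scenes while giving crawlers and bookmarks a distinct URL per permutation.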

    Read the article

  • Challenge 19 – An Explanation of a Query

    - by Dave Ballantyne
    I have received a number of requests for an explanation of my winning query of TSQL Challenge 19. This involved traversing a hierarchy of employees and rolling a count of orders from subordinates up to superiors. The first concept I shall address is the hierarchyId, which is constructed within the CTE called cteTree. cteTree is a recursive CTE that will expand the parent-child hierarchy of the personnel in the table @emp. One useful feature of a recursive CTE is that data can be 'passed' from the parent to the child data. The hierarchyId column is similar to the hierarchyId data type that was introduced in SQL Server 2008 and represents the position of the person within the organisation. Let us start with a simplistic example: Albert manages Bob and Eddie. Bob manages Carl and Dave. The hierarchyId will represent each person's position in this relationship in a single field. In this simple example we could append the user IDs together into a varchar field as detailed below. This will enable us to select a branch of the tree by filtering using WHERE hierarchyId LIKE '1,2%' to select Bob and all his subordinates. Naturally, this is not comprehensive enough to provide a full solution, but as opposed to concatenating the IDs together into a varchar datatyped column, we can apply the same theory to a varbinary. The IDs are CAST into a varbinary(4) datatype (4 bytes being what is used to store an integer) and the hierarchyId is built from those. For example: The important point to bear in mind for later in the query is that the binary data generated is 'byte order comparable', i.e. we can ORDER a dataset with it and the resulting data will be in the order required. Now would probably be a good time to download the example file and, after the cte 'cteTree', uncomment the line 'select * from cteTree'. Mark this and all prior code and execute. This will show you how this theory directly relates to the actual challenge data. The only deviation from the above is that instead of using the ID of an employee, I have used the row_number() ranking function to order each level by LastName, FirstName. This enables me to order by the hierarchyId in the final result set so that the result set is in the required order. Your output should be something like the below. Notice also the 'Level' column that contains the depth at which the employee sits within the tree. I would encourage you to 'play' with the query, changing the order in the row_number() or the length of the cast in the hierarchyId, to see how that affects the outcome. The next cte, 'cteTreeWithOrderCount', is a join between cteTree and the @ord table, and COUNTs the number of orders per employee. A LEFT JOIN is employed here to account for the occasion where an employee has made no sales. Executing a 'Select * from cteTreeWithOrderCount' will return the result set as below. The order here is unimportant, as this is only a staging point of the data and only the final result set in a cte chain needs an Order by clause, unless TOP is utilised. cteExplode joins the above result set to the tally table (Nums) for 'Level' occurrences. So, if Level is 2 then 2 rows are required. This is done to expand the dataset, creating a new column (PathInc) which is the (n+1) integers contained within the hierarchyId. For example, with the data for Robert King as given above, the below 3 rows will be returned.
    From this you can see that the PathInc column now contains the values for Andrew Fuller and Steven Buchanan, who are Robert King's superiors within the tree. Finally, cteSumUp sums the orders for each person and their subordinates using the PathInc generated above, and the final select does the final simple mathematics and filters to restrict the result set to only the 'original' row per employee.
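    A stripped-down sketch of the cteTree idea (column names simplified from the actual challenge code):

      WITH cteTree AS (
          -- Anchor: the top of the tree, with a 4-byte id as the path root.
          SELECT EmpId, MgrId, 1 AS [Level],
                 CAST(CAST(EmpId AS binary(4)) AS varbinary(max)) AS hierarchyId
          FROM   @emp
          WHERE  MgrId IS NULL
          UNION ALL
          -- Recursive step: append each child's 4 bytes to its parent's path.
          SELECT e.EmpId, e.MgrId, t.[Level] + 1,
                 CAST(t.hierarchyId + CAST(e.EmpId AS binary(4)) AS varbinary(max))
          FROM   @emp e
                 INNER JOIN cteTree t ON e.MgrId = t.EmpId
      )
      SELECT *
      FROM   cteTree
      ORDER  BY hierarchyId; -- byte-order comparable, so this is tree order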

    Read the article

  • Java & Android: Help linking an item in a ListView to its correct view, but not the way I know of

    - by Capsud
    Hi, I'm developing an Android app, and what I have is a String array of restaurants in one class... static final String[] AtoZ = new String[] { "Ananda", "Brambles Cafe", "Brannigans", "Buona Sera", "Cafe Mao", "Cafe Mimo", "Dante", "Eddie Rockets", "Frango's World Cuisine", "Nando's", "Overends Restaurant @ Airfield House", "Pizza Hut", "Roly Saul", "Siam Thai","Smokey Joes","Sohag Tandoori", "TGI Friday","The Rockfield Lounge", "Winters Bar", "Al Boschetto","Baan Thai", "Bella Cuba", "Bellamys","Bianconis","Canal Bank Cafe", "Canalettos Restaurant","Chandni Restaurant", "Chill Out Cafe", "Crowes", "Da Vincenzo", "Druids", "Dylan", "Epic Restaurant", "Jewel in the Crown", "Juniors", "Kanum Thai","Kites", "Koishi","Maia Restaurant", "Mangetu Restaurant", "Millers Pizza Kitchens", "O'Connells Restaurant", "Ocras Restaurant", "Orchid Szechuan Restaurant", "Roly's Bistro", "Ryans Beggars Bush", }; I have also created a view for each of these restaurants in my layouts folder, and this array is going to be displayed in a ListView in my app. What I want to know is: what is the quickest way of linking the clicked item to its correct view, without having to type out each position in the array and have a series of if statements, which would take a year with this! I don't want to be doing something like this: if(position == 1) { setContentView(R.layout.bentleys); } as it would take a year doing that for each one... Please help. Thanks a lot.
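    One common approach, sketched here with invented R.layout names, is a parallel array of layout ids kept in the same order as AtoZ and indexed by the clicked position:

      // Must be kept in the same order as AtoZ.
      static final int[] LAYOUTS = new int[] {
          R.layout.ananda, R.layout.brambles_cafe, R.layout.brannigans, // ...
      };

      @Override
      protected void onListItemClick(ListView l, View v, int position, long id) {
          super.onListItemClick(l, v, position, id);
          setContentView(LAYOUTS[position]); // one line replaces all the ifs
      }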

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as JavaScript, Atom, XAML and XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS -> ASP.NET hierarchy. Let's take a look at the two approaches available. Static Compression: compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk and serving the compressed alias whenever the static content is requested and it hasn't changed. The overhead for this is minimal and it should be aggressively enabled. Dynamic Compression: works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed on every request as the page regenerates its content. As such, dynamic compression has a much bigger impact than static caching. How Compression is configured: Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is from the default settings in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added json output and enabled dynamic compression):

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
            <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
            <dynamicTypes>
              <add mimeType="text/*" enabled="true" />
              <add mimeType="message/*" enabled="true" />
              <add mimeType="application/x-javascript" enabled="true" />
              <add mimeType="application/json" enabled="true" />
              <add mimeType="*/*" enabled="false" />
            </dynamicTypes>
            <staticTypes>
              <add mimeType="text/*" enabled="true" />
              <add mimeType="message/*" enabled="true" />
              <add mimeType="application/x-javascript" enabled="true" />
              <add mimeType="application/atom+xml" enabled="true" />
              <add mimeType="application/xaml+xml" enabled="true" />
              <add mimeType="*/*" enabled="false" />
            </staticTypes>
          </httpCompression>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>
      </configuration>

    You can find documentation on the httpCompression and urlCompression keys here, respectively: http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx and http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx The httpCompression Element – What and How to compress: httpCompression basically configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up by mime type, matched against the Content-Type headers returned in HTTP responses.
    For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client. The urlCompression Element – Enables and Disables Compression: The urlCompression element is a quick way to turn compression on and off. By default, static compression is enabled server-wide and dynamic compression is disabled server-wide. This might be a bit confusing, because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute of the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. These attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false" but doStaticCompression="true"! Static Compression is enabled by Default, Dynamic Compression is not: Because static compression is very efficient in IIS 7, it's enabled by default server-wide and there probably is no reason to ever change that setting. Dynamic compression, however, since it's more resource-intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting <urlCompression doDynamicCompression="true" /> in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server; rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (or, more likely, is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!! How Static Compression works: Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location if already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already-compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the compressed file from the compression folder, which is located at %SystemDrive%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\. If you look into the subfolders you'll find the pre-compressed files, which IIS serves directly to the client until the underlying files are changed. As I mentioned before, static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum: since IIS compresses content only very infrequently, it makes sense to apply maximum compression.
    You can do this with the staticCompressionLevel setting on the scheme element: <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> Other than that, the default settings are probably just fine. Dynamic Compression – not so fast! By default dynamic compression is disabled, and that's actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as that would create a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance. There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server: dynamicCompressionDisableCpuUsage and dynamicCompressionEnableCpuUsage. By default these are set to 90 and 50, which means that when CPU usage hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again, this is actually quite sensible, as it uses CPU power for compression when it's available and backs off when the threshold has been hit. It's a good way to use some of that extra CPU power on your big servers when utilization is low. Again, these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment. Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type: <add mimeType="application/json" enabled="true" /> What about Deflate Compression? The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to: <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied: changing the scheme in ApplicationHost.config didn't show up on the site right away, and it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate – not sure if it's worth the slightly less common compression type, but the option at least is available. IIS 7 finally makes GZip Easy: In summary, IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking.
    But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc., they all use GZip compression. © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box via SSH & Samba that is hidden/connected behind another one. Setup:

        A           switch          B               C
      |----|        |---|         |----|          |----|
      |eth0|--------|   |---------|eth0|          |    |
      |----|        |---|         |eth1|----------|eth1|
                                  |----|          |----|

    E.g., SSH/Samba from A to C. How does one go about this? I was thinking that it cannot be done via IP alone? Or can it? Could B say "hi on eth0, if you're looking for 192.168.0.2, it's here on eth1"? Is this NAT? This is a large private network, so what about if another PC has that IP?! More likely it would be PAT? A would say "hi 192.168.109.15:1234", B would say "hi on eth0, traffic for port 1234 goes on here, eth1". How could that be done? And would the SSH/Samba daemons see the correct packet header info and work?

    IP info:

      A - eth0 - 192.168.109.2
      B - eth0 - B1 = 192.168.109.15, B2 = 172.24.40.130
        - eth1 - 192.168.0.1
      C - eth1 - 192.168.0.2

    A, B & C are RHEL (RedHat), but Windows computers can be connected to the switch. I configured the 192.168.0.* IPs; they are changeable.

    Update after response from Eddie: a few problems (and machine B's IP is different!). From A, ssh 172.24.40.130 works OK (can get to B2), but ssh 172.24.40.130 -p 2022 -vv times out with:

      OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to 172.24.40.130 [172.24.40.130] port 2022.
      ...wait ages...
      debug1: connect to address 172.24.40.130 port 2022: Connection timed out
      ssh: connect to host 172.24.40.130 port 2022: Connection timed out

    From B2:

      $ service iptables status
      Table: filter
      Chain INPUT (policy ACCEPT)
      num  target  prot opt source      destination
      Chain FORWARD (policy ACCEPT)
      num  target  prot opt source      destination
      1    ACCEPT  tcp  --  0.0.0.0/0   192.168.0.2   tcp dpt:22
      Chain OUTPUT (policy ACCEPT)
      num  target  prot opt source      destination
      Table: nat
      Chain PREROUTING (policy ACCEPT)
      num  target  prot opt source      destination
      1    DNAT    tcp  --  0.0.0.0/0   0.0.0.0/0     tcp dpt:2022 to:192.168.0.2:22
      Chain POSTROUTING (policy ACCEPT)
      num  target  prot opt source      destination
      Chain OUTPUT (policy ACCEPT)
      num  target  prot opt source      destination

    And ssh from B2 to C works fine: $ ssh 192.168.0.2

    Route info:

      $ route
      Kernel IP routing table
      Destination   Gateway       Genmask         Flags  Metric  Ref  Use  Iface
      192.168.0.0   *             255.255.255.0   U      0       0    0    eth1
      172.24.40.0   *             255.255.255.0   U      0       0    0    eth0
      169.254.0.0   *             255.255.0.0     U      0       0    0    eth1
      default       172.24.40.1   0.0.0.0         UG     0       0    0    eth0

      $ ip route
      192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.1
      172.24.40.0/24 dev eth0 proto kernel scope link src 172.24.40.130
      169.254.0.0/16 dev eth1 scope link
      default via 172.24.40.1 dev eth0

    So I just don't know why the port forward doesn't work from A to B2?
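    For what it's worth, a common missing piece in this kind of DNAT setup (a hedged guess, not a confirmed diagnosis) is packet forwarding and return routing on B:

      # On B (sketch): make sure forwarding is enabled, and masquerade the
      # forwarded SSH traffic so C replies back via B instead of needing a
      # route to A's 172.24.40.0/24 network.
      sysctl -w net.ipv4.ip_forward=1
      iptables -t nat -A POSTROUTING -o eth1 -p tcp -d 192.168.0.2 --dport 22 -j MASQUERADE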

    Read the article

  • Learning WPF and MVVM - best approach for learning from scratch

    - by bplus
    Hello, I've got about three years of C# experience. I'd like to learn some WPF and the MVVM pattern. There are a lot of links to articles on this site, but I'm getting a little overwhelmed. Would a sensible approach for a beginner be to forget MVVM for a while, quickly learn a bit of WPF, then come back to MVVM? I had a leaf through this book at work today; it doesn't seem to mention MVVM (at least not in the index). I was pretty surprised by this, as I thought MVVM was supposed to be the "lingua franca" of WPF. Also, I've just started working at a new company and they are using MVVM with WinForms - has anyone come across this before? Can anyone recommend a book that will teach me both WPF and MVVM?

    Read the article

  • Prevent Python from caching the imported modules

    - by Olivier
    While developing a largeish project (split into several files and folders) in Python with IPython, I run into the trouble of cached imported modules. The problem is that an import module statement only reads the module once, even if that module has changed! So each time I change something in my package, I have to quit and restart IPython. Painful. Is there any way to properly force reloading of some modules? Or, better, to somehow prevent Python from caching them? I have tried several approaches, but none works. In particular I run into really, really weird bugs, like some modules or variables mysteriously becoming equal to None... The only sensible resource I found is Reloading Python modules, from pyunit, but I have not checked it. I would like something like that. A good alternative would be for IPython to restart, or to restart the Python interpreter somehow. So, if you develop in Python, what solution have you found to this problem?
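    Two things usually come up here. IPython has an autoreload extension (%load_ext autoreload, then %autoreload 2) that re-imports changed modules before executing each line; and a module can be reloaded by hand, as in this sketch (mymodule is a placeholder):

      import mymodule  # placeholder for the module being developed

      try:
          from importlib import reload  # Python 3
      except ImportError:
          pass                          # Python 2: reload() is a builtin

      # Re-read the module's source without restarting the interpreter.
      # Caveat: existing references to the old objects are NOT updated,
      # which is one source of the "mysteriously None" style of confusion.
      reload(mymodule)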

    Read the article

  • How to integrate KVC in MVC?

    - by Paperflyer
    So I have an MVC application in Cocoa. There are some custom views, a controller and a model. Of course, the views need to know some stuff, so they get their data from the controller. However, they do not use accessors on the controller; they use KVC with a keypath that calls right through to the model:

      // In view.m
      time = [timeSource valueForKeyPath:@"theModel.currentTime"];
      // timeSource is a pseudo-delegate of the view that holds the controller

    This simplifies things a great deal and technically, the views still don't know the model in person (that is, by pointer). But of course, they access it directly. Is this a usual (or at least sensible) usage of KVC and MVC? Or how would you implement this kind of communication?

    Read the article

  • NetBeans and Eclipse-like "run configurations"

    - by auramo
    Is it possible to create anything similar to Eclipse's "run configurations" in NetBeans? I am working on a huge project which is currently not divided into any subprojects in Eclipse. There are in fact many applications in the project, each with its own main method and separate classpath. I know, it's a mess. I'm considering migrating the project to NetBeans. In the long run it would be sensible to create many projects, but for now it would be a real life-saver if I could do the same in NetBeans as in Eclipse: create "launchers" which have their own classpaths. Is this possible? If it's easy to emulate this behaviour with "external" projects, hints about that are welcome as well.

    Read the article

  • Connect to SVN repository with Netbeans using SVN+SSH

    - by shuby_rocks
    Hello all, I am trying to connect to an SVN server in order to import my project into it with the svn+ssh authentication method. I am using the NetBeans IDE (6.8) with the Subversion plugin installed, on Windows XP SP2. I have plink installed, with its path set in the Windows PATH environment variable. When I use the similar-looking repository URL (XXXX and YYYY replaced with sensible things) svn+ssh://XXXX@YYYY/home/dce/svn/trunk along with this external tunnel command plink -l <myUserName> -i C:\\privateKey.ppk I keep getting this error: org.tigris.subversion.javahl.ClientException: Network connection closed unexpectedly I have searched the Internet and tried many things, but nothing has worked out. Please help if anybody has some idea what may be going wrong. Thanks a lot in advance.
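    One hedged suggestion: on Windows, Subversion also reads tunnel definitions from its runtime config file (%APPDATA%\Subversion\config), which sidesteps any quoting problems in the IDE's tunnel-command box. The -batch flag makes plink fail fast rather than hang on an interactive prompt, which a client typically reports as a closed connection:

      [tunnels]
      ssh = plink.exe -batch -l myUserName -i C:/privateKey.ppk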

    Read the article

  • SQL - re-arrange a table via query

    - by abelenky
    I have a poorly designed table that I inherited. It looks like:

      User  Field  Value
      -------------------
      1     name   Aaron
      1     email  [email protected]
      1     phone  800-555-4545
      2     name   Mike
      2     email  [email protected]
      2     phone  777-123-4567
      (etc, etc)

    I would love to extract this data via a query in the more sensible format:

      User  Name   Email              Phone
      -------------------------------------------
      1     Aaron  [email protected]    800-555-4545
      2     Mike   [email protected]    777-123-4567

    I'm a SQL novice, but have tried several queries with variations of Group By, all without anything even close to success. Is there a SQL technique to make this easy?
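    For a fixed set of fields, the usual technique is GROUP BY with one MAX(CASE...) per column. A sketch, assuming the table is called UserFields (no name is given above):

      SELECT [User],
             MAX(CASE WHEN Field = 'name'  THEN Value END) AS Name,
             MAX(CASE WHEN Field = 'email' THEN Value END) AS Email,
             MAX(CASE WHEN Field = 'phone' THEN Value END) AS Phone
      FROM   UserFields
      GROUP  BY [User]
      ORDER  BY [User];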

    Read the article

  • What are jQuery best practices regarding Ajax convenience methods and error handling?

    - by JonathanHayward
    Let's suppose, for example, that I want to partly clone Gmail's interface with jQuery Ajax and implement periodic auto-saving as well as sending. And in particular, let us suppose that I care about error handling: expecting network and other errors, I want sensible handling of different errors instead of just being optimistic. If I use the "low-level" feature $.ajax(), then it's clear how to specify an error callback, but the convenience methods $.get(), $.post() and .load() do not allow an error callback to be specified. What are the best practices for pessimistic error handling? Is it by registering an .ajaxError() handler with certain wrapped sets, or an introspection-style global error handler in $.ajaxSetup()? What would the relevant portions of code look like to initiate an autosave, so that a "could not autosave" type warning is displayed if an attempted autosave fails, and perhaps a message that is customized to the type of error? Thanks,
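    By way of illustration, a minimal pessimistic autosave sketch (the /autosave endpoint and element ids are invented), using $.ajax() directly since the shorthand methods take no error callback:

      function autosave(draft) {
          $.ajax({
              type: "POST",
              url: "/autosave",            // hypothetical endpoint
              data: { body: draft },
              success: function () {
                  $("#status").text("Draft saved.");
              },
              error: function (xhr, textStatus) {
                  // textStatus (e.g. "timeout" vs "error") allows a message
                  // customised to the type of failure.
                  $("#status").text("Could not autosave (" + textStatus + ").");
              }
          });
      }

      // Kick off a periodic autosave, Gmail-style.
      setInterval(function () { autosave($("#editor").val()); }, 60000);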

    Read the article

  • How can I differentiate between the .net WebBrowser component and an actual browser?

    - by Septih
    OK, so we have an online downloads store accessed via our software. Recently we've had requests to allow downloads via normal browsers and it's fairly easy just to slap a download page on. The problem is that it would be confusing to people having two download links, one for the software and one for their web browser, so we want to differentiate between the two and only show the relevant download link. From what I've gathered, the .net WebBrowser component is the same as IE and uses the same User Agent, so we can't use that unless we subclass the WebBrowser in the software to make it use a specific User Agent. It's the more sensible option, but we'd have to roll out another updated version, which is less than ideal. Are there any other ways to tell if someone's accessing a site via the .net component? My only other alternative is to copy the store to a different address with the different download links and send people there. Again this is doable, but not ideal.
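    One sketch of the subclassing route mentioned above (header name and URL invented): the WinForms WebBrowser's Navigate overload can attach additional headers, so the software could send a marker that the download page checks for:

      // C# sketch: tag requests made by the embedded component.
      var browser = new System.Windows.Forms.WebBrowser();
      browser.Navigate(
          "http://store.example.com/downloads",  // hypothetical store URL
          null,                                  // default target frame
          null,                                  // no POST data
          "X-Client: OurSoftware\r\n");          // marker the server can sniff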

    Read the article

  • QFileDialog and German umlauts within a path

    - by MB
    Hey everybody, I am working on a project which I am developing with Python and PyQt4. I have stumbled upon a somewhat odd behaviour of QFileDialog that does not occur when running the project within my IDE (Eclipse). The problem is that QFileDialog in ExistingFiles mode fails to return the list of selected files when one of the file paths contains a German umlaut (ä, ü, ö, etc.). QFileDialog does not offer options or parameters to make it handle this scenario sensibly. Does anyone have any ideas of how to tackle this issue?
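    A hedged guess at the usual suspect (assuming Python 2 here): converting the returned QString paths with str() chokes on non-ASCII characters, while unicode() keeps the umlauts intact:

      from PyQt4 import QtGui

      dialog = QtGui.QFileDialog()
      dialog.setFileMode(QtGui.QFileDialog.ExistingFiles)
      if dialog.exec_():
          # unicode(), not str(): str() garbles paths containing ä, ü, ö
          paths = [unicode(p) for p in dialog.selectedFiles()]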

    Read the article

  • When is a PHP project too small for a framework?

    - by Jonathan Nicol
    I'm about to start on a small, static website project: no database or CMS required. Basically, a brochure website. I used the CodeIgniter framework recently to develop a full-blown web application, and I'm wondering if it is appropriate to also use CI for smaller, simpler sites. Typically for a static brochure site I would write regular PHP pages with a few includes thrown in to save on repetition (i.e. HTML with a sprinkling of PHP), but this time around I'm wondering if my new friend CodeIgniter might be able to streamline the development process. Is it sensible to consider a framework for such a simple project, or is it overkill? I'm worried that I might be the proverbial carpenter whose only tool is a hammer, and sees every problem as a nail!

    Read the article

  • Normalize or Denormalize in high traffic websites

    - by Inam Jameel
    What is the best practice for database design for high-traffic websites like this one, Stack Overflow? Should one use a normalized database for record keeping, a denormalized technique, or a combination of both? Is it sensible to design a normalized database as the main database for record keeping, to reduce redundancy, and at the same time maintain a denormalized copy of the database for fast searching? Or should the main database be denormalized, with normalized views built at the application level for fast database operations? Or is there some approach besides those mentioned above? What is the best practice for designing high-traffic websites?

    Read the article

  • Using Parallel Extensions with ThreadStatic attribute. Could it leak memory?

    - by the-locster
    I'm using Parallel Extensions fairly heavily, and I've just now encountered a case where using thread local storage might be sensible, to allow re-use of objects by worker threads. As such I was looking at the ThreadStatic attribute, which marks a static field/variable as having a unique value per thread. It seems to me that it would be unwise to use PE with the ThreadStatic attribute without any guarantee of thread re-use by PE. That is, if threads are created and destroyed to some degree, would the variables (and thus the objects they point to) remain in thread local storage for some indeterminate amount of time, thus causing a memory leak? Or perhaps the thread storage is tied to the threads and disposed of when the threads are disposed? But then you still potentially have threads in a pool that are long-lived and that accumulate thread local storage from the various pieces of code the threads are used for. Is there a better approach to obtaining thread local storage with PE? Thank you.
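    Two alternatives worth sketching: Parallel.For has overloads with localInit/localFinally callbacks designed for exactly this re-use pattern, and .NET 4's ThreadLocal<T> is disposable, which addresses the lifetime worry for pooled threads that outlive the loop. A sketch, not a drop-in for any particular workload:

      using System;
      using System.Threading;
      using System.Threading.Tasks;

      class Demo
      {
          static void Main()
          {
              // One lazily created buffer per worker thread, re-used by
              // every iteration that lands on that thread.
              using (var buffer = new ThreadLocal<byte[]>(() => new byte[4096]))
              {
                  Parallel.For(0, 100, i =>
                  {
                      byte[] local = buffer.Value;
                      // ... work with the per-thread buffer ...
                  });
              } // Dispose releases the per-thread values, so pooled threads
                // that outlive the loop do not pin them indefinitely.
          }
      }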

    Read the article

  • built-in schema datatype for html / xhtml

    - by John Clements
    Is there a built-in schema datatype for XHTML data? Suppose I want to specify a "boozle" element that contains two "woozles", each of which is arbitrary XHTML. I want to write something like this, using the RELAX NG compact syntax:

      namespace nifty = "http://brinckerhoff.org/nifty/"
      start = element nifty:boozle {woozle, woozle}
      woozle = element nifty:woozle {xhtml}

    Unfortunately, xmllint then signals this error:

      ./lab.rng:43: element ref: Relax-NG parser error : Reference xhtml has no matching definition
      ./lab.rng:43: element ref: Relax-NG parser error : Internal found no define for ref xhtml

    So my question is this: is there something sensible that I should put in place of "xhtml" above?
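    There is no built-in xhtml datatype; a sketch of the usual wildcard workaround, which accepts any well-formed element content rather than actually validating it as XHTML:

      # "Anything goes" stand-in for xhtml (no real XHTML validation).
      woozle = element nifty:woozle { anyXhtml* }
      anyXhtml = element * { (attribute * { text } | text | anyXhtml)* }

    Alternatively, the RELAX NG modularization of XHTML can be pulled in with an include, at the cost of a much larger schema.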

    Read the article

  • How to set up 3rd party developer portal

    - by Michael
    I am developing a web service and depend on 3rd-party developers to write client applications for it. I need to set up a developer portal: a web site where existing and potential developers would find documentation (HTML pages), doc files for download, library downloads, a wiki, forums, and a support ticketing system. There should be a public part and a part protected by login; I want only logged-in users to submit tickets, for example. I don't want to host it myself. I would prefer a generic design based on a sensible template, to which I can add minimal customization, such as a logo. I don't want to write code to get any of the above functionality. I will do all that if necessary, but I'd hope there would be an online service to do things like that. What services would you guys recommend?

    Read the article

  • SQL Loop over a family tree

    - by simon831
    Using SQL Server 2008. I have a family tree of animals stored in a table, and want to give some information on how 'genetically diverse' (or not) the offspring is. In SQL, how can I produce sensible metrics to show how closely related the parents are? Perhaps some sort of percentage of shared blood, or the number of generations to go back before there is a shared ancestor?

      AnimalTable: Id, Name, mumId, dadId

      select * from AnimalTable child
      inner join AnimalTable mum on child.[mumId] = mum.[Id]
      inner join AnimalTable dad on child.[dadId] = dad.[Id]
      inner join AnimalTable mums_mum on mum.[mumId] = mums_mum.[Id]
      inner join AnimalTable mums_dad on mum.[dadId] = mums_dad.[Id]
      inner join AnimalTable dads_mum on dad.[mumId] = dads_mum.[Id]
      inner join AnimalTable dads_dad on dad.[dadId] = dads_dad.[Id]
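    Rather than hand-writing one join per generation, a recursive CTE can walk arbitrarily far up the tree. A sketch of a 'generations to a shared ancestor' metric, assuming @mumId and @dadId hold the two parents being assessed:

      WITH Ancestors AS (
          SELECT Id, mumId, dadId, 0 AS generation, Id AS rootId
          FROM   AnimalTable
          WHERE  Id IN (@mumId, @dadId)
          UNION ALL
          SELECT p.Id, p.mumId, p.dadId, a.generation + 1, a.rootId
          FROM   Ancestors a
                 INNER JOIN AnimalTable p ON p.Id IN (a.mumId, a.dadId)
      )
      -- Smallest combined distance from the two parents to a common
      -- ancestor; NULL means no shared ancestor is recorded at all.
      SELECT MIN(m.generation + d.generation) AS GenerationsToSharedAncestor
      FROM   Ancestors m
             INNER JOIN Ancestors d ON m.Id = d.Id
      WHERE  m.rootId = @mumId
        AND  d.rootId = @dadId;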

    Read the article
