Search Results

Search found 7854 results on 315 pages for 'resource dictionary'.

Page 193 of 315

  • Parse raw HTTP Headers

    - by Cev
    I have a raw HTTP request as a string and I would like to represent its fields in an object. Is there any way to parse the individual headers out of an HTTP string like this? 'GET /search?sourceid=chrome&ie=UTF-8&q=ergterst HTTP/1.1\r\nHost: www.google.com\r\nConnection: keep-alive\r\nAccept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5\r\nUser-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.45 Safari/534.13\r\nAccept-Encoding: gzip,deflate,sdch\r\nAvail-Dictionary: GeNLY2f-\r\nAccept-Language: en-US,en;q=0.8\r\n [...]'
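
    A minimal Python sketch of one way to split such a string into its pieces (an illustration, not an accepted answer; it assumes the request line and headers are CRLF-separated as in the sample above):

        # Hypothetical raw request, shortened from the sample in the question.
        raw = (
            "GET /search?sourceid=chrome&ie=UTF-8&q=ergterst HTTP/1.1\r\n"
            "Host: www.google.com\r\n"
            "Connection: keep-alive\r\n"
            "Accept-Language: en-US,en;q=0.8\r\n"
        )

        request_line, _, header_block = raw.partition("\r\n")
        headers = {}
        for line in header_block.split("\r\n"):
            if not line:
                continue  # skip the empty fragment left by the trailing CRLF
            name, _, value = line.partition(":")
            headers[name.strip()] = value.strip()

        print(request_line)     # GET /search?sourceid=... HTTP/1.1
        print(headers["Host"])  # www.google.com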

    Read the article

  • What are the best techniques to protect a 'homemade' framework from visitors who are not logged in?

    - by Hermet
    First of all, I would like to say that I have used the search box looking for a similar question, unsuccessfully, maybe because of my poor English skills. The way I currently do this is by checking on every single page that a session has been opened. If not, the user gets redirected to a 404 page, to make it seem like the requested file doesn't exist. I really don't know whether this is secure or whether there is a better, safer way, and I'm currently working with somewhat confidential data that should never become public. Could you give me some tips? Or leave a link where I could find some? Thank you very much, and again excuse me for butchering the language.
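
    A minimal sketch of that "no session, answer 404" pattern in Python (purely illustrative: the question names no framework, so Flask, the route and the session key are assumptions here):

        from functools import wraps
        from flask import Flask, abort, session

        app = Flask(__name__)
        app.secret_key = "change-me"  # hypothetical placeholder

        def login_required(view):
            """Hide protected pages from anonymous visitors by answering 404."""
            @wraps(view)
            def wrapped(*args, **kwargs):
                if "user_id" not in session:
                    abort(404)  # pretend the page does not exist
                return view(*args, **kwargs)
            return wrapped

        @app.route("/reports")
        @login_required
        def reports():
            return "confidential data"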

    Read the article

  • Overloading the environment

    - by Richo
    I've recently switched to keeping my home directory, across all my machines, in an svn repo, meaning that my utility scripts, configuration (irssi, vim, zsh, screen etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile and then, if needed on a per-site basis, overriding it in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Compared with my default environment from within an X session, before any of my personal configuration, I have not even increased it by 50%, but some of the machines I work on are low on resources. Am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure, or alternatively writing a single module for storing config that all of my utilities can inherit.

    Read the article

  • Hyper-V cluster - one VM won't migrate

    - by Chris W
    We have a Failover Cluster built on 6 blades, each running Hyper-V. Each box is running Server 2008 R2. We've got a number of VMs running that all have the same basic config: VHD stored on a cluster shared volume and 2 virtual NICs (1 for the LAN connection and 1 for the SAN connection). All of our VMs will happily migrate between any of the blades, apart from one single VM which runs fine on its current blade but will not migrate to any other location. What could be the cause, and where should I look for a detailed error message? I can't seem to find much information logged in any of the logs. Edit: I know the usual culprit is mismatched resource names. We've already been there, with the NICs named differently on some of the blades. As far as we can tell, everything now looks to be identical on each bit of metal.

    Read the article

  • How to query a many-to-many table (one table's values become column headers)

    - by CRice
    Given this table structure, I want to flatten out the many-to-many relationships, making the values in the Name field of one table into column headers and the quantities from the same table into column values. The current idea, which will work, is to put the values into a Dictionary (hashtable) and represent this data in code, but I'm wondering if there is a SQL way to do this. I am also using Linq-to-SQL for data access, so a Linq-to-SQL solution would be ideal.

        [TableA]  (int Id)
        [TableB]  (int id, string Name)
        [TableAB] (int tableAId, int tableBId, int Quantity)
        fk: TableA.Id joins to TableAB.tableAId
        fk: TableB.Id joins to TableAB.tableBId

    Is there a way I can query the three tables and return one result? For example:

        TableA:  [Id]
                 1
        TableB:  [Id], [Name]
                 1, "Red"
                 2, "Green"
                 3, "Blue"
        TableAB: [TableAId], [TableBId], [Quantity]
                 1, 1, 5
                 1, 2, 6
                 1, 3, 7

        Query result: [TableA.Id], [Red], [Green], [Blue]
                      1, 5, 6, 7

    Read the article

  • Plotting a cumulative graph of Python datetimes

    - by ventolin
    Say I have a list of datetimes, and we know each datetime to be the recorded time of an event happening. Is it possible in matplotlib to graph the frequency of this event occurring over time, showing this data in a cumulative graph (so that each point is greater than or equal to all of the points that came before it), without preprocessing this list? (e.g. passing datetime objects directly to some wonderful matplotlib function) Or do I need to turn this list of datetimes into a list of dictionary items, such as: {"year": 1998, "month": 12, "date": 15, "events": 92} and then generate a graph from this list? Sorry if this seems like a silly question - I'm not all that familiar with matplotlib, and would like to save myself the effort of doing it the latter way if matplotlib can already deal with datetime objects itself.
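
    A minimal matplotlib sketch of the direct approach (illustrative only; the synthetic event list and the choice of a step plot are assumptions):

        from datetime import datetime, timedelta
        import matplotlib.pyplot as plt

        # Hypothetical stand-in for the list of recorded event datetimes.
        events = [datetime(2024, 1, 1) + timedelta(hours=3 * i) for i in range(100)]

        events.sort()
        counts = range(1, len(events) + 1)  # cumulative number of events so far

        plt.step(events, counts, where="post")  # matplotlib accepts datetimes directly
        plt.xlabel("time")
        plt.ylabel("cumulative events")
        plt.gcf().autofmt_xdate()
        plt.show()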

    Read the article

  • How to find links and modify HTML using BeautifulSoup in Python

    - by systempuntoout
    Starting from HTML input like this:

        <p>
        <a href="http://www.foo.com">this if foo</a>
        <a href="http://www.bar.com">this if bar</a>
        </p>

    using BeautifulSoup, I would like to change this HTML into:

        <p>
        <a href="http://www.foo.com">this if foo[1]</a>
        <a href="http://www.bar.com">this if bar[2]</a>
        </p>

    saving the parsed links in a dictionary, with a result like this:

        links_dict = {"1":"http://www.foo.com","2":"http://www.bar.com"}

    Is it possible to do this using BeautifulSoup? Any valid alternative?
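
    A minimal sketch of one way to do it (assuming BeautifulSoup 4 with the html.parser backend, which is an assumption on my part):

        from bs4 import BeautifulSoup

        html = """<p>
        <a href="http://www.foo.com">this if foo</a>
        <a href="http://www.bar.com">this if bar</a>
        </p>"""

        soup = BeautifulSoup(html, "html.parser")
        links_dict = {}
        for i, a in enumerate(soup.find_all("a"), start=1):
            links_dict[str(i)] = a["href"]           # record the href under its index
            a.string = "%s[%d]" % (a.get_text(), i)  # append [n] to the link text

        print(links_dict)  # {'1': 'http://www.foo.com', '2': 'http://www.bar.com'}
        print(soup)        # the rewritten markup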

    Read the article

  • Oracle redo log performance degradation when inserting

    - by Aldarund
    I have an Oracle 11g database and I'm testing it for inserts. The database is running in NOARCHIVELOG mode. I have 3 redo logs configured, 2 GB each. I'm trying to insert data into a test table. At the beginning it goes fine, at about 15k inserts/second, and I commit after every 200 inserts. But after about 1.3m inserted records it becomes really slow, about 1-2k inserts/second. As I noticed in the resource explorer, at this point all the redo logs have been filled, and from that point on the inserts are slow. So my question is: why does it become so slow once the redo logs fill up, even though I commit every 200 records? And how can this situation be fixed (other than turning off logging completely for the inserts)?

    Read the article

  • Test sorting through QTP

    - by onkar
    There is a Java table in my application which has 1 column and 5 rows. The contents of the rows are as below, arranged in descending order:

        172-18-zfs
        MKTLAB
        NFSVOL
        datastore1
        datastore1(1)

    After clicking on the column header they get sorted in ascending order, like this:

        datastore1(1)
        datastore1
        NFSVOL
        MKTLAB
        172-18-zfs

    Through QTP I want to check whether this sorting is correct or not. I have used the sort() method of the dictionary, but it doesn't give the expected result; it just sorts in alphabetical order. In the expected sorting order, lowercase letters should come first, then capital letters, then numbers.
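
    An illustrative sketch in Python (not QTP/VBScript) of a sort key that ranks lowercase letters before capitals before digits; comparing the rows read from the UI against a list sorted with this key is one way to verify the application's ordering:

        def custom_key(value):
            def rank(ch):
                if ch.islower():
                    return (0, ch)   # lowercase letters first
                if ch.isupper():
                    return (1, ch)   # then capital letters
                if ch.isdigit():
                    return (2, ch)   # then numbers
                return (3, ch)       # anything else last
            return [rank(ch) for ch in value]

        # Rows as read from the UI table after clicking the column header.
        ui_rows = ["datastore1(1)", "datastore1", "NFSVOL", "MKTLAB", "172-18-zfs"]
        expected = sorted(ui_rows, key=custom_key)
        print(expected)
        print(ui_rows == expected)  # True only if the UI order matches this collation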

    Read the article

  • Converting one alphabet to another

    - by Branimir
    I am preparing a simple dictionary project and I have set it up to search in two languages. Still, as one of them uses Cyrillic letters, I have to be able to look a word up in the database even if it is written in Latin letters. What I mean: in Cyrillic the word is "кон", in Latin "kon". Both should give the description of the word (in this case "horse"). I have been thinking of using two structures with two alphabets in order to achieve this... but somehow I cannot get it to work. Do any of you have experience with a similar situation? If so, please share some advice or a code sample.
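
    A minimal Python sketch of one approach (the tiny transliteration table and the single example entry are assumptions for illustration):

        # Map Latin letters to their Cyrillic counterparts (extend as needed).
        LATIN_TO_CYRILLIC = str.maketrans({
            "k": "к",
            "o": "о",
            "n": "н",
        })

        dictionary = {"кон": "horse"}  # hypothetical single-entry dictionary

        def lookup(word):
            # Try the word as typed, then its transliterated form.
            return dictionary.get(word) or dictionary.get(word.translate(LATIN_TO_CYRILLIC))

        print(lookup("кон"))  # horse
        print(lookup("kon"))  # horse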

    Read the article

  • Can the lock function be used to implement thread-safe enumeration?

    - by Daniel
    I'm working on a thread-safe collection that uses Dictionary as a backing store. In C# you can do the following:

        private IEnumerable<KeyValuePair<K, V>> Enumerate()
        {
            if (_synchronize)
            {
                lock (_locker)
                {
                    foreach (var entry in _dict)
                        yield return entry;
                }
            }
            else
            {
                foreach (var entry in _dict)
                    yield return entry;
            }
        }

    The only way I've found to do this in F# is using Monitor, e.g.:

        let enumerate() =
            if synchronize then
                seq {
                    System.Threading.Monitor.Enter(locker)
                    try
                        for entry in dict -> entry
                    finally
                        System.Threading.Monitor.Exit(locker)
                }
            else
                seq { for entry in dict -> entry }

    Can this be done using the lock function? Or, is there a better way to do this in general? I don't think returning a copy of the collection for iteration will work because I need absolute synchronization.

    Read the article

  • MS Clear Screen Saver doesn't work

    - by ufoq
    I have a problem with a really exotic thing - the Microsoft Clear Screen Saver. As its name suggests, it's a screen saver that's transparent (MS posted it as part of the Windows 2000 Resource Kit). When you move the mouse or hit a key, the "lock" dialog appears. I would like to use this to view servers' desktops without needing to log in. I tested it on my XP machines, and it works flawlessly. But on W2K3 servers it doesn't work. After the screensaver timeout, this error message is displayed:

        "The Clear Screen Saver cannot display the user desktop after the workstation has been locked"

    Read the article

  • Do Managers in the Python multiprocessing module lock the shared data?

    - by AnonProcess
    This question has been asked before: http://stackoverflow.com/questions/2936626/how-to-share-a-dictionary-between-multiple-processes-in-python-without-locking However, I have several doubts regarding the program given in the answer. The issue is that the following step isn't atomic: d['blah'] += 1. Even with locking, the answer provided in that question would lead to random results, since Process 1 reads the value of d['blah'], saves it on the stack, increments it, and writes it back again; in between, Process 2 can read d['blah'] as well. Locking means that while d['blah'] is being written or read, no other process can access it. Can someone clarify my doubts?
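
    A minimal sketch of the locked version being discussed (an illustration; the worker loop count and the number of processes are arbitrary):

        from multiprocessing import Lock, Manager, Process

        def worker(d, lock):
            for _ in range(1000):
                with lock:          # makes the read-increment-write sequence atomic
                    d["blah"] += 1

        if __name__ == "__main__":
            manager = Manager()
            d = manager.dict({"blah": 0})
            lock = Lock()
            procs = [Process(target=worker, args=(d, lock)) for _ in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(d["blah"])  # 4000 with the lock; unpredictable without it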

    Read the article

  • Can you have a staging and production slot in Azure Websites?

    - by Barry King
    I'm looking at hosting 3 websites (they will all use the same linked database resource, but I think I have to use 3 websites within Azure for this): www.website.com, provider.website.com and admin.website.com. Using Windows Azure Websites, can you have staging and production slots? I think this feature is only available for Azure Cloud Services, but there is little documentation on this. If it's not possible, is there another way, other than spinning up 3 more sites to act as the staging sites? I want the ability to "swap" from staging to production.

    Read the article

  • AutoMapper determine what to map based on generic type

    - by Daz Lewis
    Hi, is there a way to provide AutoMapper with just a source and, based on the specified mapping for the type of that source, automatically determine what to map to? So for example I have a type of Foo and I always want it mapped to Bar, but at runtime my code can receive any one of a number of generic types.

        public T Add(T entity)
        {
            // List of mappings
            var mapList = new Dictionary<Type, Type>
            {
                { typeof(Foo), typeof(Bar) },
                { typeof(Widget), typeof(Sprocket) }
            };

            // Based on the type of T, determine what we map to...somehow!
            var t = mapList[entity.GetType()];

            // What goes in ?? to ensure var in the case of Foo will be a Bar?
            var destination = AutoMapper.Mapper.Map<T, ??>(entity);
        }

    Any help is much appreciated.

    Read the article

  • Rsync fails for files that start with an underscore when the destination is ZFS

    - by Eric
    Hi everyone. I'm using rsync 3.1.0pre1 on Mac OS X 10.8.5, and am trying to rsync one folder to another. The destination is a ZFS volume mounted via SMB. The problem I'm having is that files that start with an underscore (e.g. '_filename.jpg') are not being successfully synced to the destination. I get the following error message:

        rsync: mkstemp "/path/to/destination/._filename.jpg.NUgYJw" failed: Permission denied (13)

    In this case, '_filename.jpg' does not make it to the destination. I understand that rsync creates hidden, temporary files at the destination which are preceded by '.' and have a random extension appended on the end. But the original filename starts with '_', not '.', and I haven't asked rsync to copy extended attributes / resource forks over (unless it always does that). The rsync command I'm using is:

        rsync -avE --exclude='.DS_Store' --exclude '.Trash' --exclude 'Thumbs.db' --exclude '._*' --delete /source/ /destination/

    Has anyone found a way around this problem? Thank you!

    Read the article

  • What is faster: multiple `send`s or using buffering?

    - by dauerbaustelle
    I'm playing around with sockets in C/Python and I wonder what the most efficient way is to send headers from a Python dictionary to the client socket. My ideas:

        1. Use a send call for every header. Pros: no memory allocation needed. Cons: many send calls -- probably error prone; error management would be rather complicated.
        2. Use a buffer. Pros: one send call, error checking a lot easier. Cons: need a buffer :-) malloc/realloc should be rather slow, and using a (too) big buffer to avoid realloc calls wastes memory.

    Any tips for me? Thanks :-)
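
    A minimal Python sketch of the buffered variant (illustrative only; the function name, status line and header values are made up):

        def send_headers(sock, status_line, headers):
            # Build the whole header block in memory, then push it with one call.
            lines = [status_line]
            lines.extend("%s: %s" % (name, value) for name, value in headers.items())
            payload = ("\r\n".join(lines) + "\r\n\r\n").encode("latin-1")
            sock.sendall(payload)  # sendall retries short writes for us

        # Usage, assuming `client` is an already-accepted socket object:
        # send_headers(client, "HTTP/1.1 200 OK",
        #              {"Content-Type": "text/html", "Content-Length": "42"})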

    Read the article

  • Balanced search tree query, asymptotic analysis

    - by AGeek
    Hi, the situation is as follows: we have n numbers and we have to print them in sorted order. We have access to a balanced dictionary data structure, which supports the operations search, insert, delete, minimum and maximum, each in O(log n) time. We want to retrieve the numbers in sorted order in O(n log n) time using only insert and an in-order traversal. The answer to this is:

        Sort()
            initialize(t)
            while (not EOF)
                read(x)
                insert(x, t)
            Traverse(t)

    Now the query is: if we read the elements in O(n) time and then traverse the elements in O(log n) time (in-order traversal), then the total time for this algorithm is O(n + log n), according to me. Please explain the time calculation for this algorithm. How does it sort the list in O(n log n) time? Thanks.
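
    An illustrative Python sketch of the algorithm quoted above (a plain, unbalanced BST is used only to keep the sketch short; a balanced dictionary would keep every insert at O(log n), so n inserts plus one O(n) traversal give O(n log n) overall):

        class Node:
            def __init__(self, key):
                self.key = key
                self.left = None
                self.right = None

        def insert(root, key):
            # One insert costs O(height); O(log n) when the tree is balanced.
            if root is None:
                return Node(key)
            if key < root.key:
                root.left = insert(root.left, key)
            else:
                root.right = insert(root.right, key)
            return root

        def in_order(root, out):
            # The traversal visits every node once: O(n) total.
            if root is not None:
                in_order(root.left, out)
                out.append(root.key)
                in_order(root.right, out)

        root = None
        for x in [5, 1, 4, 2, 3]:
            root = insert(root, x)

        result = []
        in_order(root, result)
        print(result)  # [1, 2, 3, 4, 5]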

    Read the article

  • How to move a Windows machine properly from RAID 1 to RAID 10? [migrated]

    - by goober
    Goal I would like to add two more hard drives to my current RAID 1 setup and create a RAID 0 setup on top of the two RAID 1 setups (which I believe is referred to as "RAID 10"). Components Involved Intel P68 Chipset Motherboard 4 SATA ports that can be configured for Raid An intel SSD cache that sits in front of the RAID, and a 64 GB SSD configured in that manner Two 1TB HDDs configured in RAID 1 OS: Windows 7 Professional Resources Consulted so far I found a great resource on LinuxQuestions.org for a good "best practices" process for Linux machines, but I'd like to develop a similar process that I know works on Windows Machines.

    Read the article

  • Find value within a range in lookup table

    - by francis
    I have the simplest problem to implement, but so far I have not been able to get my head around a solution in Python. I have built a table that looks similar to this one:

        501 - ASIA
        1262 - EUROPE
        3389 - LATAM
        5409 - US

    I will test a certain value to see which of these ranges it falls within: 389 -> ASIA, 1300 -> LATAM, 5400 -> US. A value greater than 5409 should not return a lookup value. I normally have a one-to-one match and would implement a dictionary for the lookup, but in this case I have to consider these ranges, and I am not seeing my way out of the problem. Maybe without providing the whole solution, could you give some comments that would help me look in the right direction? It is very similar to a VLOOKUP in a spreadsheet. I would describe my Python knowledge as somewhere between basic and intermediate. Many thanks in advance.
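
    A minimal sketch with the standard bisect module (the table values are the ones from the question; everything else is illustrative):

        import bisect

        bounds = [501, 1262, 3389, 5409]            # upper bound of each range
        regions = ["ASIA", "EUROPE", "LATAM", "US"]

        def lookup(value):
            i = bisect.bisect_left(bounds, value)   # first bound >= value
            return regions[i] if i < len(regions) else None  # beyond 5409 -> no match

        print(lookup(389))   # ASIA
        print(lookup(1300))  # LATAM
        print(lookup(5400))  # US
        print(lookup(6000))  # None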

    Read the article

  • Python - Is there a better/more efficient way to find a node in a tree?

    - by Sej P
    I have a node data structure defined as below and am not sure whether the find_matching_node method is Pythonic or efficient. I am not well versed in generators but think there might be a better solution using them. Any ideas?

        class HierarchyNode():
            def __init__(self, nodeId):
                self.nodeId = nodeId
                self.children = {}  # opted for dictionary to help reduce lookup time

            def addOrGetChild(self, childNode):
                return self.children.setdefault(childNode.nodeId, childNode)

            def find_matching_node(self, node):
                '''
                look for the node in the immediate children of the current node.
                if not found recursively look for it in the children nodes
                until gone through all nodes
                '''
                matching_node = self.children.get(node.nodeId)
                if matching_node:
                    return matching_node
                else:
                    for child in self.children.itervalues():
                        matching_node = child.find_matching_node(node)
                        if matching_node:
                            return matching_node
                return None
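
    One possible generator-based variant, sketched against the HierarchyNode class above (Python 2, hence itervalues; the free functions are just for illustration):

        def walk(node):
            # Lazily yield every descendant of `node`, depth-first.
            for child in node.children.itervalues():
                yield child
                for descendant in walk(child):
                    yield descendant

        def find_matching_node(root, target):
            # Stop at the first node whose nodeId matches; None if there is none.
            return next((n for n in walk(root) if n.nodeId == target.nodeId), None)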

    Read the article

  • In Node.js, how can I load my modules once and then use them throughout the app?

    - by TIMEX
    I want to create one global module file, and then have all my files require that global module file. Inside that file, I would load all the modules once and export a dictionary of loaded modules. How can I do that? I actually tried creating this file... and every time I require('global_modules'), all the modules kept reloading. It's O(n). I want the file to be something like this (but it doesn't work):

        // global_modules.js - only load these one time
        var modules = {
            account_controller: '/account/controller.js',
            account_middleware: '/account/middleware.js',
            products_controller: '/products/controller.js',
            ...
        }

        exports.modules = modules;

    Read the article

  • Graphing/charting of CPU utilisation [on hold]

    - by Peter
    So Nagios can be good at graphing a particular resource's utilisation or other metrics, but I'm looking for a tool that can output a chart or other graphical representation of how much CPU time / CPU utilisation all services on a server are currently consuming. I think New Relic could probably achieve this to an extent, but I was wondering if there is a popular open source app used for this. In case I am explaining this badly, my actual problem is that I have a shared server with suexec enabled (i.e. httpd CGI running under multiple user accounts). I'd like to know which users are using the most CPU during periods of the day.

    Read the article

  • NSSortDescriptor: finding the path to the key I wish to use for sorting

    - by RickiG
    Hi, I am using an NSSortDescriptor to sort a collection of NSArrays. I then came across a case where the particular NSArray to be sorted contains an NSDictionary which itself contains an NSDictionary. I would like to sort by the string paired with a key in the innermost dictionary. This is how I would reference the string:

        NSDictionary *productDict = [MyArray objectAtIndex:index];
        NSString *dealerName = [[productDict objectForKey:@"dealer"] objectForKey:@"name"];

    How would I use the dealerName in my NSSortDescriptor to sort the array?

        NSSortDescriptor *sortDesc = [[NSSortDescriptor alloc] initWithKey:/* ? */ ascending:YES];
        sortedDealerArray = [value sortedArrayUsingDescriptors:sortDesc];
        [sortDesc release];

    Hope someone can help me with how to go about sorting according to keys inside objects inside other objects :) Thank you.

    Read the article

  • Puppet: get real-time status of catalog evaluation and post to a remote server

    - by txworking
    According to this article, http://docs.puppetlabs.com/guides/puppet_internals.html, there are four phases when the puppet agent gets a catalog from the master: resource generation -> relationships -> evaluation -> reporting. On reporting: as the transaction progresses, it collects logs and metrics on what it does. At the end of evaluation, it turns this information into a report, which it sends to the server (if requested). So at the end of evaluation the puppet agent generates a report and sends it to the master. Is there a way to get the real-time status of the evaluation phase and post it to a remote log collector? Glad for any suggestions.

    Read the article
