Search Results

Search found 4955 results on 199 pages for 'range'.

Page 169/199

  • LDAP query using Python always returns no result

    - by Grey
    I am trying to use Python to query an LDAP server, and it always returns no results. Can anyone help me find what is wrong with my code? It runs fine without exceptions, but the search never returns anything. I played around with the filter, like "cn=partofmyname", but no luck. Thanks for the help.

        import ldap

        try:
            l = ldap.open("server")
            l.protocol_version = ldap.VERSION3
            l.set_option(ldap.OPT_REFERRALS, 0)
            output = l.simple_bind("cn=username,cn=Users,dc=domian, dc=net", 'password$R')
            print output
        except ldap.LDAPError, e:
            print e

        baseDN = "DC=rim,DC=net"
        searchScope = ldap.SCOPE_SUBTREE
        ## retrieve all attributes - again adjust to your needs - see documentation for more options
        retrieveAttributes = None
        Filter = "(&(objectClass=user)(sAMAccountName=myaccount))"

        try:
            ldap_result_id = l.search(baseDN, searchScope, Filter, retrieveAttributes)
            print ldap_result_id
            result_set = []
            while 1:
                result_type, result_data = l.result(ldap_result_id, 0)
                if len(result_data) == 0:
                    print 'no result'
                    break
                else:
                    for i in range(len(result_set)):
                        for entry in result_set[i]:
                            try:
                                name = entry[1]['cn'][0]
                                email = entry[1]['mail'][0]
                                phone = entry[1]['telephonenumber'][0]
                                desc = entry[1]['description'][0]
                                count = count + 1
                                print "%d.\nName: %s\nDescription: %s\nE-mail: %s\nPhone: %s\n" % \
                                    (count, name, desc, email, phone)
                            except:
                                pass
                    ## here you don't have to append to a list;
                    ## you could do whatever you want with the individual entry
                    #if result_type == ldap.RES_SEARCH_ENTRY:
                    #    result_set.append(result_data)
                    #    print result_set
        except ldap.LDAPError, e:
            print e

        l.unbind()
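
    A likely culprit, offered as a hedged reading of the code rather than a verified diagnosis: the inner loop walks result_set, but the only line that appends to result_set is commented out, so the loop body never runs even when result_data contains entries (and count is never initialized before count = count + 1). A minimal corrected sketch of just that loop, continuing from the variables above:

        # Sketch only: iterate over the freshly returned result_data,
        # not the never-populated result_set, and initialize the counter.
        count = 0
        while 1:
            result_type, result_data = l.result(ldap_result_id, 0)
            if len(result_data) == 0:
                break
            if result_type == ldap.RES_SEARCH_ENTRY:
                for dn, attrs in result_data:
                    name = attrs.get('cn', [''])[0]
                    count += 1
                    print "%d. Name: %s" % (count, name)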

  • Pymedia video encoding failed

    - by user1474837
    I am using Python 2.5 with Windows XP. I am trying to turn a list of pygame images into a video file using the function below. I found the function on the internet and edited it. It worked at first, then it stopped working. This is what it printed out:

        Making video...
        Formating 114 Frames...
        starting loop
        making encoder
        Frame 1 process 1
        Frame 1 process 2
        Frame 1 process 2.5

    This is the error:

        Traceback (most recent call last):
          File "ScreenCapture.py", line 202, in <module>
            makeVideoUpdated(record_files, video_file)
          File "ScreenCapture.py", line 151, in makeVideoUpdated
            d = enc.encode(da)
        pymedia.video.vcodec.VCodecError: Failed to encode frame( error code is 0 )

    This is my code:

        def makeVideoUpdated(files, outFile, outCodec='mpeg1video', info1=0.1):
            fw = open(outFile, 'wb')
            if fw == None:
                print "Cannot open file " + outFile
                return
            if outCodec == 'mpeg1video':
                bitrate = 2700000
            else:
                bitrate = 9800000
            start = time.time()
            enc = None
            frame = 1
            print "Formating " + str(len(files)) + " Frames..."
            print "starting loop"
            for img in files:
                if enc == None:
                    print "making encoder"
                    params = {'type': 0, 'gop_size': 12, 'frame_rate_base': 125,
                              'max_b_frames': 90, 'height': img.get_height(),
                              'width': img.get_width(), 'frame_rate': 90,
                              'deinterlace': 0, 'bitrate': bitrate,
                              'id': vcodec.getCodecID(outCodec)}
                    enc = vcodec.Encoder(params)
                # Create VFrame
                print "Frame " + str(frame) + " process 1"
                bmpFrame = vcodec.VFrame(vcodec.formats.PIX_FMT_RGB24, img.get_size(),
                                         # Convert image to 24bit RGB
                                         (pygame.image.tostring(img, "RGB"), None, None))
                print "Frame " + str(frame) + " process 2"
                # Convert to YUV, then encode
                da = bmpFrame.convert(vcodec.formats.PIX_FMT_YUV420P)
                print "Frame " + str(frame) + " process 2.5"
                d = enc.encode(da)  # THIS IS WHERE IT STOPS
                print "Frame " + str(frame) + " process 3"
                fw.write(d.data)
                print "Frame " + str(frame) + " process 4"
                frame += 1
            print "saving file"
            fw.close()

    Could somebody tell me why I have this error and possibly how to fix it? The files argument is a list of pygame images, outFile is a path, outCodec is the default, and info1 is not used anymore.

    UPDATE 1: This is the code I used to make that list of pygame images.

        from PIL import ImageGrab
        import time, pygame

        pygame.init()
        f = []  # This is the list that contains the images
        fps = 1
        for n in range(1, 100):
            info = ImageGrab.grab()
            size = info.size
            mode = info.mode
            data = info.tostring()
            info = pygame.image.fromstring(data, size, mode)
            f.append(info)
            time.sleep(fps)
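
    One hypothesis worth ruling out (an assumption, not a confirmed pymedia diagnosis): the encoder is built from the first image's dimensions, and a raw screen grab is often not a multiple of 16 pixels, which MPEG-1 macroblock encoding generally expects; YUV420 additionally needs even dimensions. A minimal sketch that crops each capture before it reaches the encoder:

        import pygame

        def crop_to_macroblock(img, block=16):
            # Trim a pygame surface so width and height are multiples
            # of `block` (16 covers MPEG-1 macroblocks).
            w, h = img.get_size()
            return img.subsurface((0, 0, w - w % block, h - h % block)).copy()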

  • Good way to allow people to select a lot of things?

    - by jphenow
    I'm using jQuery, ASP.NET, SQL Server, and the other usual suspects to design a company CRM. After users put in contact info, notes, dates, places, and so forth, they have to be able to select many different people to be "CC'ed." A group of people will be required to be either "CC'ed" or "ToDo." The rest of the people can be nothing, "CC," or "ToDo." Currently we have it set up as a huge databind to templates with radio buttons for each option. Looks like shit. Anyone have any suggestions? I'd like to use a template with a datasource and have a good way to retrieve their answers and use them. I'm leaning in the jQuery direction, but like I said, I'll need up to 3 possible options for each person. This is going to be all opinion, so I'm just looking for options. Just to re-clarify: this concept is similar to email, but I don't want them to have to type anything in, since it is a set group of names that they're allowed to select from. Looking for quick, simple, and pretty; somewhere in the range of 120 names.

  • Trying to output a list using a class

    - by captain morgan
    I am trying to get the moving average of a price, but I keep getting an AttributeError in my Moving_Average class ('Moving_Average' object has no attribute 'days'). Here is what I have:

        class Moving_Average:
            def calculation(self, alist: list, days: int):
                m = self.days
                prices = alist[1::2]
                average = [0] * len(prices)
                signal = [''] * len(prices)
                for m in range(0, len(prices) - days + 1):
                    average[m+2] = sum(prices[m:m+days]) / days
                    if prices[m+2] < average[m+2]:
                        signal[m+2] = 'SELL'
                    elif prices[m+2] > average[m+2] and prices[m+1] < average[m+1]:
                        signal[m+2] = 'BUY'
                    else:
                        signal[m+2] = ''
                return average, signal

        def print_report(symbol: str, strategy: str):
            print('SYMBOL: ', symbol)
            print('STRATEGY: ', strategy)
            print('Date       Closing    Strategy Signal')

        def user():
            strategy = '''
            Which of the following strategies would you like to use?
            * Simple Moving Average [S]
            * Directional Indicator [D]
            Please enter your choice: '''
            if signal_strategy in 'Ss':
                days = input('Please enter the number of days for the average')
                days = int(days)
                strategy = 'Simple Moving Average {}-days'.format(str(days))
                m = Moving_Average()
                ma = m.calculation(gg, days)
                print(ma)

    gg is a list that contains dates and prices, e.g. [2013-10-01, 60, 2013-10-02, 60]. The output is supposed to look like:

        Date        Price   Average   Signal
        2013-10-01  60.0
        2013-10-02  60.0    60.00     BUY
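
    The AttributeError itself has a narrow cause, noted here as a strong hunch from the code above: nothing ever assigns self.days (there is no __init__), so m = self.days raises as soon as calculation runs; and since the for loop immediately rebinds m, that line can simply be deleted. A minimal sketch of the two obvious repairs:

        # Option 1: drop the unused lookup entirely.
        class Moving_Average:
            def calculation(self, alist: list, days: int):
                prices = alist[1::2]
                # ... rest unchanged, using the `days` parameter ...

        # Option 2: if the attribute is actually wanted, define it first.
        class Moving_Average:
            def __init__(self, days: int):
                self.days = days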

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via 3 fields: ID (auto-incremented), X coordinate, and Y coordinate. As it stands, I have all data on the map stored in 1 table. Whenever the map display is loaded, it simply queries the database for contents at x and y, and the DB returns the data (the other fields in the same entry). If an item on the map is doing something, it has a flag saying it's doing something, plus the ID of the action in another table holding that type of 'actions'. Essentially, all map data is stored in 1 table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering: what is the most effective/efficient structure for such a design (a map that has items, and each item has stats/actions)? I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only x number of entries in a table (coordinate range limits)? Or should it just keep growing and growing? There are a lot of queries to the table, so I'm just trying to see what is best.

  • Putting together CSV Cells in Excel with a macro

    - by Eric Kinch
    So, I have a macro to export data into CSV format, and it's working great (code at the bottom). The problem is the data I am putting into it. When I feed in the data in question, it comes out as

        Firstname,Lastname,username,password,description

    I'd like to change it so I get

        Firstname Lastname,Firstname,Lastname,username,password,description

    What I'd like to do is manipulate my existing macro to accomplish this. I'm not so good at VBS, so any input or a shove in the right direction would be fantastic. Thanks!

        Sub Make_CSV()
            Dim sFile As String
            Dim sPath As String
            Dim sLine As String
            Dim r As Integer
            Dim c As Integer

            r = 1 'Starting row of data
            sPath = "C:\CSVout\"
            sFile = "MyText_" & Format(Now, "YYYYMMDD_HHMMSS") & ".CSV"

            Close #1
            Open sPath & sFile For Output As #1

            Do Until IsEmpty(Range("A" & r)) 'You can also Do Until r = 17 (get the first 16 cells)
                sLine = ""
                c = 1
                Do Until IsEmpty(Cells(1, c)) 'Number of Columns - You could use a FOR / NEXT loop instead
                    sLine = sLine & """" & Replace(Cells(r, c), ";", ":") & """" & ","
                    c = c + 1
                Loop
                Print #1, Left(sLine, Len(sLine) - 1) 'Remove the trailing comma
                r = r + 1
            Loop
            Close #1
        End Sub

  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on Stack Overflow about creating tables on the fly, where this conversation ensued:

        "This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for?" – deceze

        "@deceze: very true. However, how else would you store the contents of these CSV files? They must be stored in MySQL for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary number of columns with an arbitrary number of rows. They can (with no exaggeration) range from a single-row, 35-column CSV to an 80k-row, single-column CSV. I am open to other ideas." – Hailwood

        "There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables!" – deceze

    Question: So my question is, what would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?
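
    For concreteness, here is one of the options deceze names (JSON storage), sketched as a minimal Python loader under the assumption that the guaranteed column is literally named "mobile"; the result maps onto a two-column table (mobile indexed, payload as TEXT):

        import csv, json

        def rows_for_db(path):
            # Yield (mobile, json_payload) pairs: the one reliable column
            # becomes the indexed key, everything else is serialized as-is.
            with open(path) as fh:
                for row in csv.DictReader(fh):
                    mobile = row.pop('mobile')
                    yield mobile, json.dumps(row)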

  • What is the difference between Inversion of Control and Dependency injection in C++?

    - by rlbond
    I've been reading recently about DI and IoC in C++. I am a little confused (even after reading related questions here on SO) and was hoping for some clarification. It seems to me that being familiar with the STL and Boost leads to use of dependency injection quite a bit. For example, let's say I made a function that finds the mean of a range of numbers:

        template <typename Iter>
        double mean(Iter first, Iter last)
        {
            double sum = 0;
            size_t number = 0;
            while (first != last) {
                sum += *(first++);
                ++number;
            }
            return sum / number;
        }

    Is this dependency injection? Inversion of control? Neither? Let's look at another example. We have a class:

        class Dice
        {
        public:
            typedef boost::mt19937 Engine;

            Dice(int num_dice, Engine& rng) : n_(num_dice), eng_(rng) {}

            int roll()
            {
                int sum = 0;
                for (int i = 0; i < n_; ++i)
                    sum += boost::uniform_int<>(1, 6)(eng_);
                return sum;
            }

        private:
            Engine& eng_;
            int n_;
        };

    This seems like dependency injection. But is it inversion of control? Also, if I'm missing something, can someone help me out?

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts to check if an interval has passed, e.g.:

    Incorrect (pseudo-code):

        DWORD endTime = GetTickCount + 10000; //10 s from now
        ...
        if (GetTickCount > endTime)
            break;

    The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range:

        endTime = 0xfffffe00 + 10000
                = 0x00002510; //9,488 decimal

    Then you perform your check:

        if (GetTickCount > endTime)

    which is satisfied immediately, since GetTickCount is larger than endTime:

        if (0xfffffe01 > 0x00002510)

    The solution: instead you should always subtract the two time values:

        DWORD startTime = GetTickCount;
        ...
        if ((GetTickCount - startTime) > 10000) //if it's been 10 seconds
            break;

    Looking at the same math:

        if ((GetTickCount - startTime) > 10000)
        if ((0xfffffe01 - 0xfffffe00) > 10000)
        if (1 > 10000)

    This is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+} or {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the tick count rolls over:

        if (0x00000100 - 0xffffff00) > 10000
        0x00000100 - 0xffffff00 = 0x00000200

    What is the intended solution for this problem?

    Edit: I've tried to temporarily turn off OVERFLOWCHECKS:

        {$OVERFLOWCHECKS OFF}
        delta := GetTickCount - startTime;
        {$OVERFLOWCHECKS ON}

    but the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
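
    The arithmetic being relied on is plain modulo-2^32 subtraction, which can be made explicit with a mask instead of depending on compiler overflow behavior; here is the idea sketched in Python (illustrative values, not Delphi code):

        MASK32 = 0xFFFFFFFF

        def ticks_elapsed(now, start):
            # Unsigned 32-bit wraparound subtraction: correct even when
            # the tick counter has rolled over between start and now.
            return (now - start) & MASK32

        assert ticks_elapsed(0x00000100, 0xFFFFFF00) == 0x200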

  • 500 internal server error at form connection

    - by klox
    Hi all. I have a problem: I can't connect to the database. What's wrong with my code? This is my code:

        $("#mod").change(function() {
            var barcode;
            barCode = $("#mod").val();
            var data = barCode.split(" ");
            $("#mod").val(data[0]);
            $("#seri").val(data[1]);
            var str = data[0];
            var matches = str.match(/(EE|[EJU]).*(D)/i);
            $.ajax({
                type: "post",
                url: "process1.php",
                data: "value=" + matches + "action=tunermatches",
                cache: false,
                async: false,
                success: function(res) {
                    $('#rslt').replaceWith(
                        "<div id='value'><h6>Tuner range is" + res + " .</h6></div>"
                    );
                }
            });
        });

    And this is my process file:

        switch(postVar('action')) {
            case 'tunermatches':
                tunermatches(postVar('tuner'));
                break;
        }

        function tunermatches($tuner)) {
            $Tuner = mysql_real_escape_string($tuner);
            $sql = "SELECT remark FROM settingdata WHERE itemname="Tuner_range" AND itemdata="$Tunermatches";
            $res = mysql_query($sql);
            $dat = mysql_fetch_array($res, MYSQL_NUM);
            if ($dat[0] > 0) {
                echo $dat[0];
            }
            mysql_close($dbc);
        }

  • When is the reintegrate option really necessary?

    - by Tor Hovland
    If you always sync a feature branch before you merge it back, why do you really have to use the --reintegrate option? The Subversion book says:

        "When merging your branch back to the trunk, however, the underlying mathematics is quite different. Your feature branch is now a mishmosh of both duplicated trunk changes and private branch changes, so there's no simple contiguous range of revisions to copy over. By specifying the --reintegrate option, you're asking Subversion to carefully replicate only those changes unique to your branch. (And in fact, it does this by comparing the latest trunk tree with the latest branch tree: the resulting difference is exactly your branch changes!)"

    So the --reintegrate option only merges the changes that are unique to the feature branch. But if you always sync before merging (which is a recommended practice, in order to deal with any conflicts on the feature branch), then the only changes between the branches are the changes that are unique to the feature branch, right? And if Subversion tries to merge code that is already on the target branch, it will just do nothing, right? In this blog post (http://blogs.open.collab.net/svn/2008/07/subversion-merg.html), Mark Phippard writes:

        "If we include those synched revisions, then we merge back changes that already exist in trunk. This yields unnecessary and confusing conflicts."

    Can somebody give me an example of when dropping --reintegrate gives me unnecessary conflicts?

  • django join-like expansion of queryset

    - by jimbob
    I have a list of Persons, each of which has multiple fields that I usually filter on, using the object_list generic view. Each person can have multiple Comments attached to them, each with a datetime and a text string. What I ultimately want to do is have the option to filter comments based on dates.

        class Person(models.Model):
            name = models.CharField("Name", max_length=30)
            ## has ~30 other fields, usually filtered on as well

        class Comment(models.Model):
            date = models.DateTimeField()
            person = models.ForeignKey(Person)
            comment = models.TextField("Comment Text", max_length=1023)

    What I want to do is get a queryset like

        Person.objects.filter(comment__date__gt=date(2011,1,1)).order_by('comment__date')

    send that queryset to object_list, and be able to see only the comments, ordered by date, with only so many objects on a page. E.g., if "Person A" has comments on 12/3/11, 1/2/11, and 1/5/11, "Person B" has no comments, and "Person C" has a comment on 1/3, I would see:

        "Person A", 1/2 - comment
        "Person C", 1/3 - comment
        "Person A", 1/5 - comment

    I would strongly prefer not to switch to filtering based on Comments.objects.filter(), as that would make me repeat large sections of code in both the view and the template. Right now, if I execute the command above, I get a queryset returning (PersonA, PersonC, PersonA), but if I try rendering that in a template, each person's comment_set will contain all their comments, even the ones outside the date range. Ideally there would be some sort of functionality where I could expand a Person queryset's comment_set into a larger queryset that can be sorted and ordered based on the comment and put into an object_list generic view. This is normally fairly simple to do in SQL with a JOIN, but I don't want to abandon the ORM, which I use everywhere else.
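
    One ORM-only approach worth sketching (hedged: the field names follow the models above, and whether it fits depends on how much view code is shared): query Comment but pull each related Person along with select_related, which yields exactly one row per in-range comment, already joined, without a second query per row:

        # Each item is a Comment; item.person is fetched in the same query.
        from datetime import date

        comments = (Comment.objects
                    .filter(date__gt=date(2011, 1, 1))
                    .select_related('person')
                    .order_by('date'))

    The view and template would then iterate comments and render comment.person.name alongside comment.date, which sidesteps the stale comment_set problem, at the cost of some of the refactoring the author hoped to avoid.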

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table, providers, that has three relevant columns (it contains more, but they are not important in this case):

        starttime: start of the window in which you can contact the provider.
        endtime:   end of the window in which you can contact the provider.
        region_id: region where the provider resides. In the USA: California, Texas, etc. In the UK: England, Scotland, etc.

    starttime and endtime are "time without time zone" columns but, indirectly, their values are in the time zone of the region in which the provider resides. For example:

        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00

    Often I need to get the list of providers whose time range contains the current server time (taking into account the time zone conversion). The problem is that the time zone offsets aren't constant: they may change during daylight saving time. However, this change is very specific to the region and is not always carried out at the same moment: EGT <=> EGST, ART <=> ARST, etc. The questions are:

    1. Is it necessary to use a web service to update the time zones of the regions every so often? Does anyone know of a web service that could serve?
    2. Is there a better approach to solve this problem?

    Thanks in advance.

    UPDATE: I will give an example to clarify what I'm trying to get. In the table providers I found these records:

        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|-----------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)

    If I execute the query in January, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +1 hour
        Server time = 02:00:00

    I should get the following result: idproviders = 1.

    If I execute the query in June, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone offset has)
        Server time = 02:00:00

    I should get the following results: idproviders = 1 and 2.
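
    A common alternative to polling a web service, offered as a sketch (the zone-name column and 'America/Chicago' for Texas are assumptions for illustration): store each region's IANA zone name and let a time zone library resolve the offset, DST included, at query time:

        from datetime import datetime
        import pytz  # ships the IANA/Olson database; updating the package updates the rules

        def provider_local_time(region_zone_name):
            # Convert the current UTC server time into the provider's wall clock.
            utc_now = datetime.now(pytz.utc)
            return utc_now.astimezone(pytz.timezone(region_zone_name)).time()

        # Compare provider_local_time('America/Chicago') against
        # starttime/endtime to decide whether the provider is reachable.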

  • Convert an int to a list of individual digits, faster?

    - by user478514
    All, I want to define a converter from an int, e.g. 987654321, to a list of its individual digits, e.g. [9, 8, 7, 6, 5, 4, 3, 2, 1]. If the number has fewer than 9 digits, for example 10, the list should be zero-padded to [0, 0, 0, 0, 0, 0, 0, 1, 0]; if it has more than 9 digits, for example 9987654321, the list should simply be [9, 9, 8, 7, 6, 5, 4, 3, 2, 1].

        >>> i
        987654321
        >>> l
        [9, 8, 7, 6, 5, 4, 3, 2, 1]
        >>> z = [0]*(len(unit) - len(str(l)))
        >>> z.extend(l)
        >>> l = z
        >>> unit
        [100000000, 10000000, 1000000, 100000, 10000, 1000, 100, 10, 1]
        >>> sum([x*y for x,y in zip(l, unit)])
        987654321
        >>> int("".join([str(x) for x in l]))
        987654321
        >>> l1 = [int(x) for x in str(i)]
        >>> z = [0]*(len(unit) - len(str(l1)))
        >>> z.extend(l1)
        >>> l1 = z
        >>> l1
        [9, 8, 7, 6, 5, 4, 3, 2, 1]
        >>> a = [i//x for x in unit]
        >>> b = [a[x] - a[x-1]*10 for x in range(9)]
        >>> if len(b) == len(a): b[0] = a[0]  # fix the a[-1] issue
        >>> b
        [9, 8, 7, 6, 5, 4, 3, 2, 1]

    I tested the solutions above but found they may not be as fast or as simple as I want, and they may have a length-related bug inside. Can anyone share a better solution for this kind of conversion? Thanks!
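
    A short sketch of the usual idiom (zfill pads with leading zeros, and numbers already wider than the target width pass through untouched), which avoids the length bookkeeping above:

        def digits(i, width=9):
            # 10 -> '000000010' -> [0,0,0,0,0,0,0,1,0]
            # 9987654321 is already wider than 9, so it is left as-is.
            return [int(d) for d in str(i).zfill(width)]

        assert digits(987654321) == [9, 8, 7, 6, 5, 4, 3, 2, 1]
        assert digits(10) == [0, 0, 0, 0, 0, 0, 0, 1, 0]
        assert digits(9987654321) == [9, 9, 8, 7, 6, 5, 4, 3, 2, 1]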

  • Which class should store the lookup table?

    - by max
    The world contains agents at different locations, with only a single agent at any location. Each agent knows where it is, but I also need to quickly check if there's an agent at a given location. Hence, I also maintain a map from locations to agents. I have a problem deciding where this map belongs: class World, class Agent (as a class attribute), or elsewhere. In the following I put the lookup table, agent_locations, in class World. But now agents have to call world.update_agent_location every time they move. This is very annoying; what if I decide later to track other things about the agents, apart from their locations? Would I need to add calls back to the World object all across the Agent code?

        class World:
            def __init__(self, n_agents):
                # ...
                self.agents = []
                self.agent_locations = {}
                for id in range(n_agents):
                    x, y = self.find_location()
                    agent = Agent(self, x, y)
                    self.agents.append(agent)
                    self.agent_locations[x, y] = agent

            def update_agent_location(self, agent, x, y):
                del self.agent_locations[agent.x, agent.y]
                self.agent_locations[x, y] = agent

            def update(self):  # next step in the simulation
                for agent in self.agents:
                    agent.update()  # next step for this agent
            # ...

        class Agent:
            def __init__(self, world, x, y):
                self.world = world
                self.x, self.y = x, y

            def move(self, x1, y1):
                self.world.update_agent_location(self, x1, y1)
                self.x, self.y = x1, y1

            def update(self):
                # find a good location that is not occupied and move there
                for x, y in self.valid_locations():
                    if not self.location_is_good(x, y):
                        continue
                    if self.world.agent_locations[x, y]:  # location occupied
                        continue
                    self.move(x, y)

    I can instead put agent_locations in class Agent as a class attribute. But that only works when I have a single World object. If I later decide to instantiate multiple World objects, the lookup tables would need to be world-specific. I am sure there's a better solution...

    EDIT: I added a few lines to the code to show how agent_locations is used. Note that it's only used from inside Agent objects, but I don't know if that would remain the case forever.
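
    One way to keep the bookkeeping out of the Agent's movement logic (a design sketch, not the author's code; it assumes World.update_agent_location is adjusted to read the old position via agent.location rather than agent.x/agent.y): make the position a property, so that assigning agent.location routes through the World exactly once, and any future per-agent tracking hangs off the same setter:

        class Agent:
            def __init__(self, world, x, y):
                self.world = world
                self._pos = (x, y)

            @property
            def location(self):
                return self._pos

            @location.setter
            def location(self, new_pos):
                # The world stays consistent no matter who moves the agent;
                # called before _pos changes so the old entry can be removed.
                self.world.update_agent_location(self, *new_pos)
                self._pos = new_pos

    Agent code then just writes self.location = (x1, y1), and only the property implementation knows that the World needs notifying.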

  • Using the Queue class in Python 2.6

    - by voipme
    Let's assume I'm stuck using Python 2.6 and can't upgrade (even if that would help). I've written a program that uses the Queue class. My producer is a simple directory listing. My consumer threads pull a file from the queue and do stuff with it. If the file has already been processed, I skip it. The processed list is generated before all of the threads are started, so it isn't empty. Here's some pseudo-code:

        import Queue, sys, threading

        processed = []

        def consumer():
            while True:
                file = dirlist.get(block=True)
                if file in processed:
                    print "Ignoring %s" % file
                else:
                    # do stuff here
                    dirlist.task_done()

        dirlist = Queue.Queue()
        for f in os.listdir("/some/dir"):
            dirlist.put(f)

        max_threads = 8
        for i in range(max_threads):
            thr = Thread(target=consumer)
            thr.start()

        dirlist.join()

    The strange behavior I'm getting is that if a thread encounters a file that's already been processed, the thread stalls out and waits until the entire program ends. I've done a little bit of testing, and the first 7 threads (assuming 8 is the max) stop, while the 8th thread keeps processing, one file at a time. But by doing that, I'm losing the entire reason for threading the application. Am I doing something wrong, or is this the expected behavior of the Queue/threading classes in Python 2.6?
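
    A plausible reading of that stall (hedged: it depends on where task_done() really sat in the original indentation): every get() must be balanced by a task_done(), including for skipped files, or join() waits forever on the outstanding count; and consumers that loop on get(block=True) never exit once the queue drains, so the process can hang even after join() would return. A minimal sketch of both repairs, reusing the names above:

        def consumer():
            while True:
                file = dirlist.get(block=True)
                try:
                    if file in processed:
                        print "Ignoring %s" % file
                    else:
                        pass  # do stuff here
                finally:
                    dirlist.task_done()  # balance every get()

        for i in range(max_threads):
            thr = threading.Thread(target=consumer)
            thr.setDaemon(True)  # let the process exit after join() returns
            thr.start()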

  • Is there a definitive reference document for Ruby syntax?

    - by JSW
    I'm searching for a definitive document on Ruby syntax. I know about the definitive documents for the core API and standard library, but what about the syntax itself? Such a document should cover: reserved words, string literal syntax, naming rules for variables/classes/modules, all the conditional statements and their permutations, and so forth. I know there are many books and tutorials, yes, but every one of them is essentially a tutorial, each with a different depth and focus. By necessity of brevity and narrative flow, they will all omit certain details of the language that the author deems insignificant. For instance, did you know that you can use a case statement without an initial case value, and it will then execute the first true when clause? Any given Ruby book or tutorial may or may not cover that particular lesser-known piece of the case syntax; it's not discussed in the section on case statements in "Programming Ruby". But that is just one small example. So far the best documentation I've found is the rubyspec project, which appears to be an attempt to write a complete test suite for the language. That's not bad, but it's a bit hard to use from a practical standpoint as a developer working on my own projects. Am I just missing something, or is there really no definitive, readable document defining the whole of Ruby syntax?

  • scheme basic loop

    - by utku
    I'm trying to write a Scheme function that behaves like a loop:

        (loop min max func)

    This loop should apply func over the range of integers from min to max, for example like this:

        (loop 3 6 (lambda (x) (display (* x x)) (newline)))
        9
        16
        25
        36

    I define the function as

        (define (loop min max fn)
          (cond ((>= max min)
                 ((fn min) (loop (+ min 1) max fn)))))

    When I run the code I get the result, and then an error occurs that I can't handle:

        (loop 3 6 (lambda (x) (display (* x x)) (newline)))
        9
        16
        25
        36
        Backtrace:
        In standard input:
           41: 0* [loop 3 6 #]
        In utku1.scheme:
            9: 1  (cond ((>= max min) ((fn min) (loop # max fn))))
           10: 2  [# ...
           10: 3* [loop 4 6 #]
            9: 4  (cond ((>= max min) ((fn min) (loop # max fn))))
           10: 5  [# ...
           10: 6* [loop 5 6 #]
            9: 7  (cond ((>= max min) ((fn min) (loop # max fn))))
           10: 8  [# ...
           10: 9* [loop 6 6 #]
            9: 10 (cond ((>= max min) ((fn min) (loop # max fn))))
           10: 11 [# #]
        utku1.scheme:10:31: In expression ((fn min) (loop # max ...)):
        utku1.scheme:10:31: Wrong type to apply: #<unspecified>
        ABORT: (misc-error)

  • Reporting Services "cannot connect to the report server database"

    - by Dano
    We have Reporting Services running, and twice in the past 6 months it has been down for 1-3 days, then suddenly started working again. The errors range from not being able to view the tree root in a browser, down to being able to enter parameters on a report but crashing before the report can generate. Looking at the logs, there is 1 error and 1 warning which seem to correspond somewhat.

    ERROR:

        Event Type: Error
        Event Source: Report Server (SQL2K5)
        Event Category: Management
        Event ID: 107
        Date: 2/13/2009
        Time: 11:17:19 AM
        User: N/A
        Computer: ********
        Description: Report Server (SQL2K5) cannot connect to the report server database.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    WARNING (always comes before the previous error):

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 2/13/2009 11:06:48 AM
        Event time (UTC): 2/13/2009 5:06:48 PM
        Event ID: 2efdff9e05b14f4fb8dda5ebf16d6772
        Event sequence: 550
        Event occurrence: 5
        Event detail code: 0
        Process information:
            Process ID: 5368
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information:
            Exception type: ReportServerException
            Exception message: For more information about this error navigate to the report server
            on the local server machine, or enable remote errors.

    During the downtime we tried restarting everything, from the server RS runs on to the database it calls to fill reports, with no success. When I came in Monday morning it was working again. Anyone out there have any ideas on what could be causing these issues?

    Edit: Tried both suggestions below several months ago, to no avail. The issue hasn't arisen since; maybe something out of my control has changed.

  • Can isdigit legitimately be locale dependent in C

    - by cdev
    In the section covering setlocale, the ANSI C standard states in a footnote that the only ctype.h functions whose behaviour is not affected by the current locale are isdigit and isxdigit. The Microsoft implementation of isdigit is locale dependent because, for example, in locales using code page 1250 isdigit only returns non-zero for characters in the range 0x30 ('0') to 0x39 ('9'), whereas in locales using code page 1252 isdigit also returns non-zero for the superscript digits 0xB2 ('²'), 0xB3 ('³') and 0xB9 ('¹'). Is Microsoft in violation of the C standard by making isdigit locale dependent? In this question I am primarily interested in C90, which Microsoft claims to conform to, rather than C99.

    Additional background: Microsoft's own documentation of setlocale incorrectly states that isdigit is unaffected by the LC_CTYPE part of the locale. The section of the C standard that covers the ctype.h functions contains some wording that I consider ambiguous: "The behavior of these functions is affected by the current locale. Those functions that have locale-specific aspects only when not in the "C" locale are noted below." I consider this ambiguous because it is unclear what it is trying to say about functions such as isdigit for which there are no notes about locale-specific aspects. It might be trying to say that such functions must be assumed to be locale dependent, in which case Microsoft's implementation of isdigit would be OK. (Except that the footnote I mentioned earlier seems to contradict this interpretation.)

  • Setting up SVN (Subversion) to manage our company's files; how to exclude large files from being versioned?

    - by Roeland
    Me and two other guys recently started our own web development company. We each work from our homes and have decided we want to keep one central location for all of our files. These files include Word documents, spreadsheets, client files, designs, etc.; anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this:

        Clients
            Company A
                Design (photoshop files, wireframes, concepts)
                Documents (logins, quotes, proposals, etc)
                Site Backups
            Company B
                Design
                Documents
                Site Backups
        Prospects
            Company C
            Company D
        Our Company
            Our Website
            Documents (contracts, operating procedures)

    My question is in regards to design files. The Photoshop files that my designer works with range in size from 10 MB to 100 MB. I don't think we need to keep these files versioned, as that would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks!

    Edit: I am also curious whether this is the way to go. I just like this system since it keeps versions of all my documents. Also, essentially I will have 3 backups in 3 different locations (3 local copies), so there's no need for separate backups. I am unsure how SVN would perform as purely a huge file repository.

  • Simplifying for-if messes with better structure?

    - by HH
    # Description: you are given a bitwise pattern and a string.
    # You need to find the number of times the pattern matches in the string.
    # Any one-liner or simple pythonic solution?

        import random

        def matchIt(yourString, yourPattern):
            """Find the number of times yourPattern occurs in yourString."""
            count = 0
            matchTimes = 0
            # How can you simplify the for-if structures?
            for coin in yourString:
                # return to base
                if count == len(pattern):
                    matchTimes = matchTimes + 1
                    count = 0
                # special case to return to 2; there could be more conditions
                # of this type, so this kind of if-conditional is screaming for havoc
                if count == 2 and pattern[count] == 1:
                    count = count - 1
                # the work horse
                # it could be simpler by breaking the initial string of length 'l'
                # into blocks of pattern-length; the number of them is 'l - len(pattern) - 1'
                if coin == pattern[count]:
                    count = count + 1
            average = len(yourString) / matchTimes
            return [average, matchTimes]

        # Generates the list
        myString = []
        for x in range(10000):
            myString = myString + [int(random.random() * 2)]

        pattern = [1, 0, 0]
        result = matchIt(myString, pattern)

        print("The sample had " + str(result[1]) + " matches and its size was " + str(len(myString)) + ".\n"
              + "So it took " + str(result[0]) + " steps in average.\n"
              + "RESULT: " + str([a for a in "FAILURE" if result[0] != 8]))

        # Sample Output
        #
        # The sample had 1656 matches and its size was 10000.
        # So it took 6 steps in average.
        # RESULT: ['F', 'A', 'I', 'L', 'U', 'R', 'E']
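
    For comparison, a compact sliding-window version (a sketch: it counts overlapping occurrences, which differs slightly from the reset-to-base bookkeeping above, so the match count may not agree exactly):

        import random

        def count_matches(seq, pattern):
            n, m = len(seq), len(pattern)
            # Compare each length-m window of seq against the pattern.
            return sum(1 for i in range(n - m + 1) if seq[i:i+m] == pattern)

        myString = [int(random.random() * 2) for x in range(10000)]
        print(count_matches(myString, [1, 0, 0]))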

  • Identifying the GeoPoint that triggers an onTap call

    - by Akroy
    I'm developing a Google Maps app on Android. I have a number of GeoPoints that I'm displaying by adding them as OverlayItems to an ItemizedOverlay. This works well for displaying them and bringing up a nice box when I click them; however, I'm now trying to put info in the box it brings up. Thus, I've extended ItemizedOverlay with my own class, and I'm overriding onTap(final GeoPoint p, final MapView mapView). At first I thought that this would be very simple, as one of the parameters is the GeoPoint, so I would know exactly which GeoPoint was clicked. However, from what I can tell, the GeoPoint argument there is the GeoPoint for where the user actually touched. Given the range within which the user can touch and still trigger onTap, that GeoPoint isn't very helpful for knowing precisely which of my GeoPoints was actually touched. I'm currently checking the parameter GeoPoint against all my existing GeoPoints and seeing which it's closest to. This seems like a super hacky abstraction inversion. Is there a better way to know what was actually tapped?

  • Getting an unhandled error, and the connection is lost, when a client tries to communicate with a chat server in Twisted

    - by user2433888
        from twisted.internet.protocol import Protocol, Factory
        from twisted.internet import reactor

        class ChatServer(Protocol):
            def connectionMade(self):
                print "A Client Has Connected"
                self.factory.clients.append(self)
                print "clients are ", self.factory.clients
                self.transport.write('Hello,Welcome to the telnet chat to sign in type aim:YOUR NAME HERE to send a messsage type msg:YOURMESSAGE ' + '\n')

            def connectionLost(self, reason):
                self.factory.clients.remove(self)
                self.transport.write('Somebody was disconnected from the server')

            def dataReceived(self, data):
                #print "data is", data
                a = data.split(':')
                if len(a) > 1:
                    command = a[0]
                    content = a[1]
                    msg = ""
                    if command == "iam":
                        self.name + "has joined"
                    elif command == "msg":
                        msg = self.name + ":" + content
                    print msg
                    for c in self.factory.clients:
                        c.message(msg)

            def message(self, message):
                self.transport.write(message + '\n')

        factory = Factory()
        factory.protocol = ChatServer
        factory.clients = []
        reactor.listenTCP(80, factory)
        print "Iphone Chat server started"
        reactor.run()

    The above code runs successfully, but when I connect a client (by typing telnet localhost 80) to this chat server and try to write a message, the connection gets lost and the following errors occur:

        Iphone Chat server started
        A Client Has Connected
        clients are [<__main__.ChatServer instance at 0x024AC0A8>]
        Unhandled Error
        Traceback (most recent call last):
          File "C:\Python27\lib\site-packages\twisted\python\log.py", line 84, in callWithLogger
            return callWithContext({"system": lp}, func, *args, **kw)
          File "C:\Python27\lib\site-packages\twisted\python\log.py", line 69, in callWithContext
            return context.call({ILogContext: newCtx}, func, *args, **kw)
          File "C:\Python27\lib\site-packages\twisted\python\context.py", line 118, in callWithContext
            return self.currentContext().callWithContext(ctx, func, *args, **kw)
          File "C:\Python27\lib\site-packages\twisted\python\context.py", line 81, in callWithContext
            return func(*args, **kw)
        --- <exception caught here> ---
          File "C:\Python27\lib\site-packages\twisted\internet\selectreactor.py", line 150, in _doReadOrWrite
            why = getattr(selectable, method)()
          File "C:\Python27\lib\site-packages\twisted\internet\tcp.py", line 199, in doRead
            rval = self.protocol.dataReceived(data)
          File "D:\chatserverultimate.py", line 21, in dataReceived
            content = a[1]
        exceptions.IndexError: list index out of range

    Where am I going wrong?
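
    Two observations, hedged as guesses from the code rather than a tested fix: self.name is never assigned (the "iam" branch computes a string and throws it away), so any valid msg: command would die next with an AttributeError; and the welcome text tells users to type "aim:" while the code checks for "iam:". A minimal defensive sketch of dataReceived:

        def dataReceived(self, data):
            # Telnet sends a trailing '\r\n'; strip it before parsing, and
            # split at most once so messages may themselves contain ':'.
            parts = data.strip().split(':', 1)
            if len(parts) != 2:
                return  # ignore malformed lines instead of crashing
            command, content = parts
            if command == "iam":
                self.name = content
                msg = self.name + " has joined"
            elif command == "msg":
                msg = getattr(self, 'name', '?') + ": " + content
            else:
                return
            for c in self.factory.clients:
                c.message(msg)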

  • In SQL Server, what is the most efficient way to compare records to other records for duplicates within a certain range?

    - by Glenn
    We have an SQL Server that gets daily imports of data files from clients. This data is interrelated, and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor-based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINs. But I have to admit I'm having a lot of trouble with this one.

    For a concrete example, suppose we have 1 table, an "orders" table, with the following 6 fields:

        order_id, customer_id, product_id, quantity, sale_date, price

    We want to look through the records to find suspect duplicates on the following example criteria. These get increasingly harder:

    1. Records that have the same product_id, sale_date, and quantity but different customer_ids should be marked as suspect duplicates for review.
    2. Records that have the same customer_id, product_id, and quantity, and have sale_dates within five days of each other, should be marked as suspect duplicates for review.
    3. Records that have the same customer_id and product_id, but quantities within 20 units of each other and sale_dates within five days of each other, should be considered suspect.

    Is it possible to satisfy each one of these criteria with a single SQL query that uses JOINs? Is this the most efficient way to do this?
