Search Results

Search found 4279 results on 172 pages for 'crystal reports 8 5'.

Page 146/172 | < Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do. I have a list of tests, with the following attributes:

      - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
      - depends: an associative array of tests/values that this test depends on;
      - join: a list of DB joins to be added when selecting items to process in this test;
      - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

      - a list of items is selected from the database using the conditions (results of depending tests, joins, and depends_db);
      - the list of items is sent to the URI (using POST or stdin);
      - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
      - the results are stored in the DB;
      - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, Graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

      - backup: hosted on the backup machine(s), called through HTTP, checks whether the machines' backups went well;
      - DNS: hosted on the local machine, called via stdin, checks whether the machines' FQDNs have valid DNS entries.

    Does such a tool/module already exist? What would be the best implementation to achieve this (in Perl or Python)?
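
    For the scheduling core on its own, here is a minimal Python sketch of the dependency-ordered, parallel dispatch described above (the test table and the run_test stub are hypothetical placeholders, not part of the original program):

        import concurrent.futures

        # Hypothetical test table: name -> set of tests it depends on.
        TESTS = {
            "backup": set(),
            "dns": set(),
            "report": {"backup", "dns"},
        }

        def run_test(name):
            # Placeholder for the real work: select items from the DB,
            # send them to the test's URI, parse the YAML result, store it.
            return name

        def schedule(tests):
            done = set()
            with concurrent.futures.ThreadPoolExecutor() as pool:
                while len(done) < len(tests):
                    # A test is ready once all its dependencies are done;
                    # all ready tests can be dispatched in parallel.
                    ready = [t for t, deps in tests.items()
                             if t not in done and deps <= done]
                    if not ready:
                        raise RuntimeError("dependency cycle detected")
                    for finished in pool.map(run_test, ready):
                        done.add(finished)

        schedule(TESTS)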

    Read the article

  • Scripts to parse and download iTunes Connect and AppStore data

    - by bradhouse
    I'm looking for recommendations of a script or series of scripts that download and parse iTunes Connect sales data and App Store comments, ratings, and rankings data for a defined app. I'm aware of solutions like AppViz, appsales-mobile, iphone-stats, and Heartbeat.app, and I'm sure I'll find a few more with more searching. I can't help but feel there must be a really decent set of open source scripts out there to do this, given how many developers are now writing apps for the App Store. I would be interested to hear about any commercial offerings as well (although my personal preference is for open source, so I can at least see what it is doing with my iTunes Connect login credentials). To be clear, I'm really looking for something that hits all of the areas mentioned:

      - App Store (per store): comments, ratings, and category/store rankings;
      - iTunes Connect: the contents of the sales reports.

    Analysis/graphs of the data are not necessary (but would be a nice-to-have, I guess). I'm not really looking for something like AppSales Mobile above; I would like the raw data so I can do my own analysis and formatting. So far it looks like AppViz (listed above) is the best out there. Any suggestions on what is good/available, or should I just go roll my own?

    Read the article

  • Field specific errors for ETL

    - by AaronLS
    I am creating an ETL process in MS SQL Server and I would like to have errors specific to a particular column of a particular row. For example, the data is initially loaded from Excel files into a table (we'll call it the Initial table) where all columns are varchar(2000), and then I stage the data to another table (the DataTyped table) that contains more specific data types (datetime, int, etc.) or more tightly constrained varchar lengths. I need to be able to create error messages for a specific field, such as:

        "Jan. 13th" is not a valid date format for the submission date. Please use a format of MM/DD/YYYY

    These error messages would need to be stored in such a way that, later in the process, an automated process can create reports from them, with each message referencing a specific row and field (someone will need to go back, correct the data in the source system, and resubmit the Excel file). So ideally they would be inserted into a Failures table of some sort, containing the primary key of the failed row, the column name, and the error message.

    Question: I am wondering if this can be accomplished with SSIS, or some open source tool like Talend, and if so, what would be your general approach? Or what hand-coded approach would you take? Here are a couple of approaches I've thought of using SQL (up until now I have done ETL by hand in SQL procs, but I want to consider other approaches, possibly even C#):

      - Use a cursor to read through the Initial table, and for each row insert a blank record with only the primary key into the DataTyped table; then use a single update statement for each column, such that if that update fails I can insert a very specific error message for that column into the error messages table.
      - Insert all the data as is into the DataTyped table, but have duplicate columns like SubmissionDate and SubmissionDateOld. After the initial insert the *Old columns have data and the rest are blank, and I have a single update for each column that sets SubmissionDate based on SubmissionDateOld.

    In addition to suggesting an approach, I'd like to know if you are using that approach or something similar already in the work you do.
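
    Either way, the underlying pattern is the same: convert each column independently and record one Failures row per failed field. A minimal Python sketch of that pattern, just to illustrate it (the column names and converters here are hypothetical, not from the question):

        from datetime import datetime

        # Hypothetical per-column converters: each raises ValueError with a
        # user-facing message when the raw varchar value does not fit the
        # target type.
        def parse_date(raw):
            try:
                return datetime.strptime(raw, "%m/%d/%Y").date()
            except ValueError:
                raise ValueError('"%s" is not a valid date format for the '
                                 'submission date. Please use a format of '
                                 'MM/DD/YYYY' % raw)

        CONVERTERS = {"SubmissionDate": parse_date, "NumberOfUnits": int}

        def validate_row(pk, row):
            # Returns typed values plus (pk, column, message) error records,
            # ready for insertion into a Failures table.
            typed, errors = {}, []
            for column, convert in CONVERTERS.items():
                try:
                    typed[column] = convert(row[column])
                except ValueError as exc:
                    errors.append((pk, column, str(exc)))
            return typed, errors

        typed, errors = validate_row(42, {"SubmissionDate": "Jan. 13th",
                                          "NumberOfUnits": "7"})
        print(errors)   # one error record, for SubmissionDate only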

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at the rate of ~10 million rows (about 5 GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200 GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key:

        [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC

    Because when we query the table we always have the entire primary key, these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.:

        SELECT [Region], MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with is to create a non-clustered index on Region. Although it means an additional insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, the results of our query come back quickly enough (200 ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".

    Read the article

  • Free tools/libraries to compare tables with filtering for SQL Server 2008 and visualize/sync diffs t

    - by MicMit
    I am building a GUI in C# for a content manager and looking for tools, code snippets or open libraries (code ideally in C#) which allow me the following:

    1. For table A in database X (test) and table A in database Y (production), and for a simple filter (e.g. listname = "XYZ"), I need to show additions/deletions/updates in some way, which might be side by side or just an HTML report, e.g.:

        2 records added      (html table with some fields)
        2 records deleted    (html table with some fields)

    Considering that this task is very common, I guess certain components should exist? Components would either return some collections from the parameters given, for further visualizing, or just produce the reports mentioned above.

    2. I need to push changes for the filter I mentioned in 1 and update the table in the production database for this filter only (i.e. for the particular list approved by the content person). Again, there are probably certain SQL code generators or components for this, in addition to diffs, or standalone tools.

    3. The key thing: the tools/libraries should be suitable for integration with the existing application in C#.

    Read the article

  • SQL Server: Output an XML field as tabular data using a stored procedure

    - by Pawan
    I am using a table with an XML data field to store the audit trails of all other tables in the database. That means the same XML field holds XML of various shapes. For example, my table has two records with XML data like this:

    1st record:

        <client>
          <name>xyz</name>
          <ssn>432-54-4231</ssn>
        </client>

    2nd record:

        <emp>
          <name>abc</name>
          <sal>5000</sal>
        </emp>

    These are two sample formats and just two records; the table actually has many more XML formats in the same field and many records in each format. Now my problem is that, upon query, I need these XML formats converted into tabular result sets. What are my options? Querying this table and generating reports from it will be a regular task. I want to create a stored procedure to which I can pass whether I need to query <emp> or <client>, and the stored procedure should then return the tabular data.

    Read the article

  • iPhone developer cert not associating with Provisioning Profiles

    - by baudot
    I'm seeing the dreaded "Code Sign error: The identity 'iPhone Developer' doesn't match any valid certificate/private key pair in the default keychain" error. Strange, as it used to work; I'm not sure what changed. A few of the symptoms I've noticed beyond this:

      - In the project info, for Code Signing Identity, instead of saying "iPhone Developer: My Name Here", it only says "iPhone Developer", followed by a list of grayed-out provisioning profiles with the error message "profile doesn't match any valid certificate/private key pair in the keychain".
      - In the Organizer, if I click the "Developer Profile" sidebar entry, it shows one entry in the "Identities" pane, "iPhone Distribution: My Name Here". However, no profiles show in the Provisioning Profiles pane.
      - In the Organizer, if I click the "Provisioning Profiles" sidebar entry, each of the profiles there reports "A valid signing identity matching this profile could not be found in your keychain."

    I've tried a handful of the usual folk cures for this ailment, without success so far, such as:

      - Cleared my old key pairs and expired developer identity cert out of the keychain.
      - Deleted my old developer profile, created a new one, and regenerated the provisioning profile afterwards.
      - Reconfirmed: the App ID on the provisioning portal for this app is a pure wildcard ID. (The "Bundle Identifier" in the Info.plist is just the app name, no reversed domain prefix.)
      - Restored my iPhone.
      - Reinstalled the latest version of Xcode.

    Read the article

  • Accuracy of OpenGL ES Instrument

    - by Rob Jones
    I'm developing a game for the iPhone. I've decided that 30 FPS is plenty, so I've written some code that only allows the app to present the render buffer every 1/30 of a second. When I tried to verify this with Instruments I got varying information. On an iPod Touch (2009 edition, 32 GB) it reports 30 FPS for Core Animation Frames Per Second. On an iPhone 3G I get wildly varying results, and not just less than 30 FPS: I see 30 FPS on a regular basis, and it actually seems to hang closer to 36-39. To investigate this anomaly I added my own FPS counter to the app, updated once per second. It stays right at 29 FPS on both devices. So, does anyone have any suggestions as to what might be going on? I expect Instruments to be accurate, so it really concerns me that it appears inaccurate. It makes me think I have a bug somewhere, but I sure can't find it.
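
    For reference, the cap-and-count arrangement described above can be sketched in Python. This only illustrates the logic, but it shows one plausible source of a steady 29 FPS reading from a hand-rolled counter:

        import time

        TARGET = 1.0 / 30.0                # minimum interval between presents

        last_present = 0.0
        frames = 0
        second_start = time.time()
        end = second_start + 3.0           # run the sketch for three seconds

        while time.time() < end:
            now = time.time()
            if now - last_present >= TARGET:
                # present the render buffer here (stubbed out in this sketch)
                last_present = now         # note: resetting to 'now' rather
                frames += 1                # than to last_present + TARGET
            else:                          # makes every interval run slightly
                time.sleep(0.001)          # long, so a per-second counter
            if now - second_start >= 1.0:  # tends to read 29 instead of 30
                print(frames, "FPS")
                frames = 0
                second_start = now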

    Read the article

  • Delphi Shell IExtractIcon usage and result

    - by Roy M Klever
    What I do: I try to extract a thumbnail using IExtractImage; if that fails I try to extract icons using IExtractIcon, to get the maximum icon size, but IExtractIcon gives strange results. The problem is that I tried to use a method that extracts icons from an image list, but if there is no large icon (256x256) it renders the smaller icon at the top-left position of the icon, and that does not look good. That is why I am trying to use IExtractIcon instead. But icons that show up as 256x256 icons in my image-list extraction method report icon sizes of 33 large and 16 small. So how do I check if a large (256x256) icon exists? If you need more info I can provide some sample code.

        if PThumb.Image = nil then
        begin
          OleCheck(ShellFolder.ParseDisplayName(0, nil,
            StringToOleStr(PThumb.Name), Eaten, PIDL, Atribute));
          ShellFolder.GetUIObjectOf(0, 1, PIDL, IExtractIcon, nil, XtractIcon);
          CoTaskMemFree(PIDL);
          bool := False;
          if Assigned(XtractIcon) then
          begin
            GetLocationRes := XtractIcon.GetIconLocation(GIL_FORSHELL, @Buf,
              sizeof(Buf), IIdx, IFlags);
            if (GetLocationRes = NOERROR) or (GetLocationRes = E_PENDING) then
            begin
              Bmp := TBitmap.Create;
              try
                OleCheck(XtractIcon.Extract(@Buf, IIdx, LIcon, SIcon,
                  32 + (16 shl 16)));
                Done := False;

    Read the article

  • WINE error when running windows application

    - by Colen
    Hello. A user reports that one of our applications doesn't work under WINE: it runs until he proceeds past a certain form, and then freezes. WINE gives the following output:

        ~/.wine/drive_c/HeroLab$ wine HeroLab.exe
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:wininet:InternetSetOptionW INTERNET_OPTION_SEND/RECEIVE_TIMEOUT 30000
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:wininet:InternetSetOptionW INTERNET_OPTION_SEND/RECEIVE_TIMEOUT 30000
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:wininet:HTTPREQ_QueryOption Semi-STUB INTERNET_OPTION_SECURITY_FLAGS: 0
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_SECURITY_FLAGS; STUB
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:wininet:InternetSetOptionW INTERNET_OPTION_SEND/RECEIVE_TIMEOUT 30000
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:wininet:InternetSetOptionW INTERNET_OPTION_SEND/RECEIVE_TIMEOUT 30000
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:wininet:InternetSetOptionW INTERNET_OPTION_SEND/RECEIVE_TIMEOUT 30000
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:wininet:InternetSetOptionW INTERNET_OPTION_SEND/RECEIVE_TIMEOUT 30000
        fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_CONNECT_TIMEOUT (30000): STUB
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0
        fixme:bitmap:CreateBitmapIndirect planes = 0

    Can anyone explain these messages to me? I assume they are messages from WINE reporting that we are calling functions in ways that WINE doesn't support, but our code doesn't call CreateBitmapIndirect at all. How can we locate the source of this problem and fix our app so it runs under WINE? Thanks.

    Read the article

  • Eclipse+PyDev+GAE memcache error

    - by bocco
    I've started using Eclipse+PyDev as an environment for developing my first app for Google App Engine. Eclipse is configured according to this tutorial. Everything was working until I started to use memcache. PyDev reports an error and I don't know how to fix it:

        Error: Undefined variable from import: get

    How do I fix this? Sure, it is only a PyDev checker problem; the code is correct and runs on GAE.

    UPDATE: I'm using PyDev 1.5.0 but experienced the same with 1.4.8. My PYTHONPATH includes (set in Project Properties/PyDev - PYTHONPATH):

        C:\Program Files\Google\google_appengine
        C:\Program Files\Google\google_appengine\lib\django
        C:\Program Files\Google\google_appengine\lib\webob
        C:\Program Files\Google\google_appengine\lib\yaml\lib

    UPDATE 2: I took a look at C:\Program Files\Google\google_appengine\google\appengine\api\memcache\__init__.py and found that get() is not declared as a memcache module function. They use the following trick to do that (I didn't know such a thing was possible):

        _CLIENT = None

        def setup_client(client_obj):
          """Sets the Client object instance to use for all module-level methods.

          Use this method if you want to have customer persistent_id() or
          persistent_load() functions associated with your client.

          Args:
            client_obj: Instance of the memcache.Client object.
          """
          global _CLIENT
          var_dict = globals()

          _CLIENT = client_obj
          var_dict['set_servers'] = _CLIENT.set_servers
          var_dict['disconnect_all'] = _CLIENT.disconnect_all
          var_dict['forget_dead_hosts'] = _CLIENT.forget_dead_hosts
          var_dict['debuglog'] = _CLIENT.debuglog
          var_dict['get'] = _CLIENT.get
          var_dict['get_multi'] = _CLIENT.get_multi
          var_dict['set'] = _CLIENT.set
          var_dict['set_multi'] = _CLIENT.set_multi
          var_dict['add'] = _CLIENT.add
          var_dict['add_multi'] = _CLIENT.add_multi
          var_dict['replace'] = _CLIENT.replace
          var_dict['replace_multi'] = _CLIENT.replace_multi
          var_dict['delete'] = _CLIENT.delete
          var_dict['delete_multi'] = _CLIENT.delete_multi
          var_dict['incr'] = _CLIENT.incr
          var_dict['decr'] = _CLIENT.decr
          var_dict['flush_all'] = _CLIENT.flush_all
          var_dict['get_stats'] = _CLIENT.get_stats

        setup_client(Client())

    Hmm... Any idea how to force PyDev to recognize that?
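
    If the goal is just to quiet the checker, PyDev honors a per-line annotation comment. A minimal sketch (assuming the standard GAE import path; the key name is hypothetical):

        from google.appengine.api import memcache

        # PyDev flags memcache.get because the module-level name is injected
        # at runtime by setup_client(); the annotation below tells PyDev's
        # code analysis to skip the "Undefined variable from import" check.
        value = memcache.get("some-key")  #@UndefinedVariable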

    Read the article

  • Importing fixtures with foreign keys and SQLAlchemy?

    - by Chris Reid
    I've been experimenting with using fixture to load test data sets into my Pylons / PostgreSQL app. This works great, except that it fails to properly create foreign keys if they reference an auto-increment id field. My fixture looks like this:

        class AuthorsData(DataSet):
            class frank_herbert:
                first_name = "Frank"
                last_name = "Herbert"

        class BooksData(DataSet):
            class dune:
                title = "Dune"
                author_id = AuthorsData.frank_herbert.ref('id')

    And the model:

        t_authors = sa.Table("authors", meta.metadata,
            sa.Column("id", sa.types.Integer, primary_key=True),
            sa.Column("first_name", sa.types.String(100)),
            sa.Column("last_name", sa.types.String(100)),
        )

        t_books = sa.Table("books", meta.metadata,
            sa.Column("id", sa.types.Integer, primary_key=True),
            sa.Column("title", sa.types.String(100)),
            sa.Column("author_id", sa.types.Integer, sa.ForeignKey('authors.id'))
        )

    When running "paster setup-app development.ini", SQLAlchemy reports the FK value as "None", so it's obviously not finding it:

        15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] INSERT INTO books (title, author_id) VALUES (%(title)s, %(author_id)s) RETURNING books.id
        15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] {'author_id': None, 'title': 'Dune'}

    The fixture docs actually warn that this might be a problem: "However, in some cases you may need to reference an attribute that does not have a value until it is loaded, like a serial ID column. (Note that this is not supported by the SQLAlchemy data layer when using sessions.)" http://farmdev.com/projects/fixture/using-dataset.html#referencing-foreign-dataset-classes

    Does this mean that this is just not supported with SQLAlchemy? Or is it possible to load the data without using SA "sessions"? How are other people handling this issue?
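
    One commonly suggested workaround is to give the parent rows explicit primary keys, so the reference already has a value before anything is loaded. A sketch of the question's own DataSet with that one change (assuming fixed ids are acceptable in test data):

        from fixture import DataSet

        class AuthorsData(DataSet):
            class frank_herbert:
                id = 1                     # explicit, so ref('id') resolves
                first_name = "Frank"       # without waiting for the serial
                last_name = "Herbert"      # id the database would assign

        class BooksData(DataSet):
            class dune:
                title = "Dune"
                author_id = AuthorsData.frank_herbert.ref('id')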

    Read the article

  • WPF abnormal CPU usage for animation

    - by 0xDEAD BEEF
    Hi! I am developing a WPF application, and a client reports extremely high CPU usage (90%), whereas I am unable to reproduce that behavior. I have traced the bottleneck down to these lines: a simple glowing animation for a small single-LED control (a blinking LED). What could be the reason for this simple animation taking up such huge CPU resources?

        <Trigger Property="State">
          <Trigger.Value>
            <local:BlinkingLedStatus>Blinking</local:BlinkingLedStatus>
          </Trigger.Value>
          <Trigger.EnterActions>
            <BeginStoryboard Name="beginStoryBoard">
              <Storyboard>
                <DoubleAnimation Storyboard.TargetName="glow"
                                 Storyboard.TargetProperty="Opacity"
                                 AutoReverse="True" From="0.0" To="1.0"
                                 Duration="0:0:0.5" RepeatBehavior="Forever"/>
              </Storyboard>
            </BeginStoryboard>
          </Trigger.EnterActions>
          <Trigger.ExitActions>
            <StopStoryboard BeginStoryboardName="beginStoryBoard"/>
          </Trigger.ExitActions>
        </Trigger>

    Read the article

  • Iterating Over Params Hash

    - by Joe Clark
    I'm having an extremely frustrating time getting some images to upload. They are obviously being uploaded as rack/multipart, but the way that I'm iterating over my params hash must be causing the problem. I could REALLY use some help, so I can stop pulling out my hair. I've got a params hash that looks like this:

        Parameters: {"commit"=>"Submit",
          "sighting_report"=>[{"number_seen"=>"1",
            "picture"=>#<File:/var/folders/IX/IXXrbzpCHkq68OuyY-yoI++++TI/-Tmp-/RackMultipart.85991.5>,
            "species_id"=>"2"}],
          "authenticity_token"=>"u0eN5MAfvGWtfEzrqBt4qfrL54VJ9SGX0jFLZCJ8iRM=",
          "sighting"=>{"sighting_date(2i)"=>"6", "name"=>"", "sighting_date(3i)"=>"5",
            "county"=>"0", "notes"=>"", "location"=>"", "sighting_date(1i)"=>"2010",
            "email"=>""}}

    My form can have multiple sighting reports with multiple pictures in each sighting report. Here's my controller code:

        def create_multiple
          @report = Report.new
          @report.name = params[:sighting]["name"]
          @report.sighting_date = Date.civil(params[:sighting][:"sighting_date(1i)"].to_i,
                                             params[:sighting][:"sighting_date(2i)"].to_i,
                                             params[:sighting][:"sighting_date(3i)"].to_i)
          @report.county_id = params[:sighting][:county]
          @report.location = params[:sighting][:location]
          @report.notes = params[:sighting][:notes]
          @report.email = params[:sighting][:email]
          @report.save!
          @report.reload
          for sr in params[:sighting_report] do
            sighting = SightingReport.new
            sighting.report_id = @report.id
            sighting.species_id = sr[:species_id]
            sighting.number_seen = sr[:number_seen]
            sighting.save
            if sr[:picture]
              sighting.reload
              for pic in sr[:picture] do
                p = SpeciesPic.new
                p.uploaded_picture = pic
                p.species_id = sighting.species_id
                p.report_id = @report.id
                p.save!
              end
            end
          end
          redirect_to :action => 'new_multiple'
        end

    Read the article

  • How do I find a source code position from an address given by a crash in Window CE

    - by Shane MacLaughlin
    I have a Windows Mobile 4.0 application, written using eVC++ 4.0 SP4 with MFC, that is exhibiting a random occasional crash in the field, e.g.:

        Exception 0x800000002 at 00112584

    It does not happen under various emulators and simulators, and hence is very difficult to trace using a debugger. The crash throws up an address and an exception type. Given that I have the PDB, is there any way to track this address back to the source? I can't recompile using VC++ 8 as it doesn't support the Mobile 4 SDK. My guess is that without a stack trace I'm not going to have much joy, as the chances are that the exception may not be in my source. Worth a try all the same.

    Edit: As suggested, I have looked at the address in the context of the .MAP file for the program. This reveals the following:

        Address         Publics by Value                                                                  Rva+Base   Lib:Object
        0001:00000000   ?GetUnduValue@@YANMM@Z                                                            00011000 f 7Par.obj
        ...
        0001:001124b8   ?OnLButtonUp@CGXGridUserDragSelectRangeImp@@UAAHPAVCGXGridCore@@AAVCPoint@@AAI@Z  001234b8 f gxseldrg.obj
        0001:001126d8   ?OnSelDragStart@CGXGridUserDragSelectRangeImp@@UAAHPAVCGXGridCore@@KK@Z           001236d8 f gxseldrg.obj

    This suggests the error occurred during CGXGridUserDragSelectRangeImp::OnLButtonUp(), which seems a bit odd, as I don't think a mouse / keyboard / screen button was pressed at the time. It could be that the stack got fragged before the crash got reported and I'm wasting my time. I'll recompile with assembler output to try to isolate it to a given line, but I don't hold out much hope :( Does the fact that the map file reports segmented addresses, e.g. 0001:xxxxxxxx, while the crash report gives unsegmented addresses, mean I have to carry out some computation to get the map address from the crash address?
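
    As for that last question: finding the crash site is just a search for the largest symbol offset at or below the crash offset. A small Python sketch using the numbers from the map excerpt above (this treats the reported 00112584 as a section 0001 offset; if the module was rebased at load time, the difference between the actual and preferred load address would have to be subtracted first):

        import bisect

        # (section 0001 offset, symbol) pairs from the .MAP excerpt
        symbols = [
            (0x00000000, "GetUnduValue"),
            (0x001124b8, "CGXGridUserDragSelectRangeImp::OnLButtonUp"),
            (0x001126d8, "CGXGridUserDragSelectRangeImp::OnSelDragStart"),
        ]

        def locate(crash_offset):
            # index of the largest symbol offset <= crash_offset
            offsets = [off for off, _ in symbols]
            i = bisect.bisect_right(offsets, crash_offset) - 1
            return symbols[i]

        print(locate(0x00112584))  # -> OnLButtonUp, matching the deduction above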

    Read the article

  • How to deploy RSWebParts.cab manually?

    - by denni
    I'm using the SSRS 2005 Web parts to display my reports in a MOSS 2007 SP1 portal. I have successfully installed the Web parts on my development, testing, and UAT servers using the following command:

        stsadm -o addwppack -filename path/to/RSWebParts.cab

    But when I tried running the same command on the production server, it gave me the following error:

        This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application.

    I know I will usually get this kind of error message when I try to deploy a custom solution having no Web application resources (such as web.config entries) to a specific Web application. But this is not a custom solution; it is an out-of-the-box SSRS Web part, and it does have resources scoped to a Web application. I even tried different combinations of the command, providing the -url, -globalinstall, and -force switches, but it still gives the same error. The configuration of the 4 servers is exactly the same, from both software and hardware perspectives, and all other features are working properly on the production server. I even tried to extract the cab file manually into the bin folder of my Web application, then modify the web.config manually to include the SafeControl element (copied from the manifest.xml inside the cab file). But that gave me an error saying it couldn't find the resource file, even though I had extracted the whole file, including the resource files, into the bin folder. Is there anyone who can help me resolve the problem? Thanks a lot.

    Read the article

  • Objects in JavaScript defined and undefined at the same time (in a FireFox extension)

    - by Alexey Romanov
    I am chasing down a bug in a Firefox extension. I've finally managed to see it for myself (I've only had reports before) and I can't understand how what I saw is possible. One error message from my extension in the Error Console is "gBrowser is not defined". This by itself would be surprising enough, since the overlay is over browser.xul and navigator.xul, and I expect gBrowser to be available from both. Even worse is the actual place where it happens: line 101 of nextplease.js, that is, inside the function isTopLevelDocument, which is only called from onContentLoaded, which is only called from onLoad here:

        gBrowser.addEventListener(this.loadType, function (event) {
            nextplease.loadListener.onContentLoaded(event);
        }, true);

    So gBrowser is defined in onLoad, but somehow undefined in isTopLevelDocument. When I tried to actually use the extension, I got another error: "nextplease is not defined". The interesting thing is that it happened on lines 853 and 857, that is, inside the functions

        nextplease.getNextLink = function () {
            nextplease.getLink(window.content, nextplease.NextPhrasesMap,
                nextplease.NextImagesMap, nextplease.isNextRegExp,
                nextplease.NEXT_SEARCH_TYPE);
        }

        nextplease.getPrevLink = function () {
            nextplease.getLink(window.content, nextplease.PrevPhrasesMap,
                nextplease.PrevImagesMap, nextplease.isPrevRegExp,
                nextplease.PREV_SEARCH_TYPE);
        }

    So nextplease is somehow defined enough to call these functions, but isn't defined inside them. Finally, executing typeof(nextplease) in Execute JS returns "object". Same for gBrowser. How can this happen? Any ideas?

    Read the article

  • 550 Error When I try to get the size of a file on an FTP

    - by Eric
    I'm trying to use an FtpWebRequest to get the size of a file on a company FTP. Yet whenever I try to get the response, an exception is thrown. See the error details in the catch block in the code below.

        string uri = "ftp://ftp.domain.com/folder/folder/file.xxx";
        FtpWebRequest sizeReq = (FtpWebRequest)WebRequest.Create(uri);
        sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
        sizeReq.Credentials = cred;
        sizeReq.UsePassive = proj.ServerConfig.UsePassive; //true
        sizeReq.UseBinary = proj.ServerConfig.UseBinary;   //true
        sizeReq.KeepAlive = proj.ServerConfig.KeepAlive;   //false

        long size;
        try
        {
            //Exception thrown here when I try to get the response
            using (FtpWebResponse fileSizeResponse = (FtpWebResponse)sizeReq.GetResponse())
            {
                size = fileSizeResponse.ContentLength;
            }
        }
        catch (WebException exp)
        {
            FtpWebResponse resp = (FtpWebResponse)exp.Response;
            MessageBox.Show(exp.Message);
            // "The remote server returned an error: (550) File unavailable
            //  (e.g., file not found, no access)."
            MessageBox.Show(exp.Status.ToString());             // ProtocolError
            MessageBox.Show(resp.StatusCode.ToString());        // ActionNotTakenFileUnavailable
            MessageBox.Show(resp.StatusDescription.ToString()); // "550 SIZE: Operation not permitted\r\n"
        }

    This code does work, however, when connected to my personal FTP. The StatusDescription of the response says that the operation is "not permitted". Could it be that my office FTP just won't allow the querying of a file size? I've also tried listing the directory details, which will return the size, and have noticed that my office FTP reports the directory details in a different format than my personal FTP:

        //work ftp ListDirectoryDetails
        -rw-r--r--   1 (?)    user    12345 Nov 16 20:28 some file name.xxx

        //personal ftp ListDirectoryDetails
        -rw-r--r--   1 user   user    12345 Mar 13 some file name.xxx

    From reading this blog post I think that my personal FTP is returning a Unix-formatted response, but my work FTP is returning a Windows-formatted response. Maybe this is unrelated, but I thought I'd mention it.

    Read the article

  • Access ADP - For/Against?

    - by webworm
    I have been tasked with taking an Access 97 application and moving the back-end data to SQL Server while moving the front end to Access 2003 (using Access Data Projects). In the process of this migration the back-end data structures will be changed significantly to support new functionality. If I had my wish we would not be using Access as the front end; I think our application would be much better served by WinForms, WPF, or a web application. We have the time needed to properly plan a business logic layer and implement an excellent solution, but powers above me want to stay with Access because that is what they are familiar with.

    What I could use help with is the pros and cons of continuing down this path of Access development. What are some legitimate arguments for and against using Access 2003? Here is what I have come up with so far.

    Pro Access:

      - Already own Access 2003 licenses
      - Easy GUI development
      - Reports look nice

    Against Access:

      - Having to use VBA (Visual Basic for Applications)
      - ADO vs. DAO (didn't Microsoft change things from Access 2002 to Access 2003?)
      - Not being tied to the Access runtime, if we move away
      - Choice in front end (WPF, WinForms, even ASP.NET)
      - Maintainability
      - True separation of logic from UI not possible
      - Does Microsoft still support Access ADP?

    Perhaps there are other issues I am not aware of, both for and against Access for application development. I am trying to keep an open mind while at the same time trying to maintain my sanity. I have been using C# since .NET was released, and the thought of going back to VBA for six months makes my head hurt, especially when I feel I could offer so much more if allowed to develop with modern languages and tools.

    Read the article

  • Dependency Injection for objects that require parameters

    - by Andrew
    All of our reports are created from object graphs that are translated from our domain objects. To enable this, we have a Translator class for each report, and have been using dependency injection for passing in dependencies. This worked great, and would yield nice classes structured like this:

        public class CheckTranslator : ICheckTranslator
        {
            public CheckTranslator(IEmployeeService empSvc,
                                   IPaycheckService paySvc)
            {
                _empSvc = empSvc;
                _paySvc = paySvc;
            }

            public Check CreateCheck()
            {
                //do the translation...
            }
        }

    However, in some cases the mapping has many different grouping options. As a result, the constructor would turn into a mix of class dependencies and parameters:

        public class CheckTranslator : ICheckTranslator
        {
            public CheckTranslator(IEmployeeService empSvc,
                                   IPaycheckService paySvc,
                                   bool doTranslateStubData,
                                   bool doAttachLogo)
            {
                _empSvc = empSvc;
                _paySvc = paySvc;
                _doTranslateStubData = doTranslateStubData;
                _doAttachLogo = doAttachLogo;
            }

            public Check CreateCheck()
            {
                //do the translation...
            }
        }

    Now, we can still test it, but it no longer really works with an IoC container, at least not in a clean fashion. Plus, we can no longer call CreateCheck twice if the settings are different for each check. While I recognize it's a problem, I don't necessarily see the right solution. It seems kind of strange to create a factory for each class... or is this the best way?

    Read the article

  • jQuery server ping slowly but surely filling memory?

    - by danspants
    I use the following piece of code to test whether our server is running while the user is on a page. I've also started adding other functions that grab small amounts of constantly changing data to be relayed to the user (files waiting for download, messages, reports, etc.). I've noticed recently that if I leave any page open (all pages contain the same function), the browser takes up more and more system memory, which I can only attribute to this regular task (overnight it reached 1.6 GB). Is there some way of clearing out the data that is being accumulated? Is this normal behaviour? As far as I can tell, every time I call the function it should overwrite the previously retrieved data.

        function testServer() {
            jQuery.ajax({
                type: "HEAD",
                url: "/media/d_arrow_blue.png",
                error: function(msg) {
                    jQuery.jGrowl("Server Disconnected");
                }
            });
            // retrieves count of files awaiting download - move to separate function
            jQuery.get("/get_files/", {"type": "count"}, function(data) {
                jQuery("#downloadList").children("div").text(data);
            });
        };

        jQuery().doTimeout(6000, function() {
            testServer();
            return true;
        });

    Read the article

  • cgi.FieldStorage always empty - never returns POSTed form Data

    - by Dan Carlson
    This problem is probably embarrassingly simple. I'm trying to give Python a spin, and I thought a good way to start would be to create a simple CGI script to process some form data and do some magic. My Python script is executed properly by Apache using mod_python, and will print out whatever I want it to print out. My only problem is that cgi.FieldStorage() is always empty. I've tried using both POST and GET; in each trial I fill out both form fields.

        <form action="pythonScript.py" method="POST" name="ARGH">
          <input name="TaskName" type="text" />
          <input name="TaskNumber" type="text" />
          <input type="submit" />
        </form>

    If I change the form to point to a Perl script, it reports the form data properly. The Python page always gives me the same result: number of keys: 0.

        #!/usr/bin/python
        import cgi

        def index(req):
            pageContent = """<html><head><title>A page from"""
            pageContent += """Python</title></head><body>"""
            form = cgi.FieldStorage()
            keys = form.keys()
            keys.sort()
            pageContent += "<br />number of keys: " + str(len(keys))
            for key in keys:
                pageContent += form[key].value
            pageContent += """</body></html>"""
            return pageContent

    I'm using Python 2.5.2 and Apache/2.2.3. This is what's in my Apache conf file (my script is in /var/www/python):

        <Directory /var/www/python/>
            Options FollowSymLinks +ExecCGI
            Order allow,deny
            allow from all
            AddHandler mod_python .py
            PythonHandler mod_python.publisher
        </Directory>
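
    One likely culprit is the handler configuration above: mod_python.publisher does not populate the CGI environment that cgi.FieldStorage() reads. Instead, it passes form fields to the published function as arguments (and also exposes the parsed form as req.form). A minimal sketch of the publisher style, using the form's own field names:

        # Sketch, assuming mod_python.publisher as configured above: the
        # POSTed fields arrive as keyword arguments to the published
        # function, not via cgi.FieldStorage.
        def index(req, TaskName=None, TaskNumber=None):
            return "<html><body>%s %s</body></html>" % (TaskName, TaskNumber)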

    Read the article

  • How much of STL is too much?

    - by Darius Kucinskas
    I am using a lot of STL code with std::for_each, bind, and so on, but I've noticed that sometimes STL usage is not a good idea. For example, if you have a std::vector and want to perform one action on each item of the vector, your first idea is to use this:

        std::for_each(vec.begin(), vec.end(), Foo());

    And it is elegant and OK, for a while. But then comes the first set of bug reports and you have to modify code. Now you need to add a parameter to the call to Foo(), so it becomes:

        std::for_each(vec.begin(), vec.end(), std::bind2nd(Foo(), X));

    But that is only a temporary solution. Now the project is maturing, you understand the business logic much better, and you want to add new modifications to the code. It is at this point that you realize you should use the good old:

        for (std::vector::iterator it = vec.begin(); it != vec.end(); ++it)

    Is this happening only to me? Do you recognise this kind of pattern in your code? Have you experienced similar anti-patterns using STL?

    Read the article

  • Does GetSystemInfo (on Windows) always return the number of logical processors?

    - by mhughes
    Reading up on this, and specifically reading the Microsoft docs, it looks like it should be returning the number of PHYSICAL processors, and that you should use GetLogicalProcessorInformation to figure out how many LOGICAL processors you have. Here's the doc I found on the SYSTEM_INFO structure: http://msdn.microsoft.com/en-us/library/ms724958(v=VS.85).aspx. And here's the doc on GetLogicalProcessorInformation: http://msdn.microsoft.com/en-us/library/ms683194.aspx.

    Reading up on it further, though, in most of the discussions I've found on this topic, developers say that GetSystemInfo (and the SYSTEM_INFO structure) reports the number of LOGICAL processors. When I search again, I find that MS did release some info on this (and a hotfix): http://support.microsoft.com/kb/936235.

    Reading that, it sounds like on XP, pre-Service Pack 3, GetSystemInfo reports the number of LOGICAL processors in the SYSTEM_INFO structure. It also reads to me that on Windows Vista and Windows 7, GetSystemInfo should be reporting the number of PHYSICAL processors (different to Windows XP pre-Service Pack 3).

    Does anyone know what it actually does? Does GetSystemInfo really report the number of physical processors (on the same computer) differently, depending on which OS it's running on?

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >