Search Results

Search found 7208 results on 289 pages for 'export to pdf'.

Page 253/289 | < Previous Page | 249 250 251 252 253 254 255 256 257 258 259 260  | Next Page >

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10000 records in about 15 seconds on my machine. At least 14.5 of those seconds (ie. almost all the time) is in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch. So maybe there's a better way to do it. Now, I would just think the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right clicked it, picked Export, and saved it as XML. Then I right clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, ie. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file? Sub TestBatchUpdate() CurrentDb.Execute "create table testy (x int, y int)" Dim rs As New ADODB.Recordset rs.CursorLocation = adUseServer rs.Open "testy", CurrentProject.AccessConnection, _ adOpenStatic, adLockBatchOptimistic, adCmdTableDirect Dim n, v n = Array(0, 1) v = Array(50, 55) Debug.Print "starting loop", Time For i = 1 To 10000 rs.AddNew n, v Next i Debug.Print "done loop", Time rs.UpdateBatch Debug.Print "done update", Time CurrentDb.Execute "drop table testy" End Sub I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way. But I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?
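    Purely as an illustration of an alternative route, the same bulk insert can be expressed as one parameterized executemany over ODBC. The sketch below is Python with pyodbc rather than VBA/ADO; the driver name and database path are assumptions, and whether this actually beats UpdateBatch on the JET engine is untested.

      # Hypothetical sketch: bulk insert into the same 'testy' table via ODBC.
      # Driver name and .mdb path are assumptions for the example.
      import pyodbc

      conn = pyodbc.connect(
          r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
          r"Dbq=C:\path\to\database.mdb;"
      )
      cur = conn.cursor()
      cur.execute("CREATE TABLE testy (x INT, y INT)")

      rows = [(50, 55)] * 10000   # the same two values, 10,000 times
      cur.executemany("INSERT INTO testy (x, y) VALUES (?, ?)", rows)
      conn.commit()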

    Read the article

  • How does PHP PDO work internally?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to run something like "SELECT * FROM myTable" and write the result into a CSV file; the table has around 90000 rows of data. My question is about using PDOStatement->fetch as I am here:

      // First, prepare the statement, using placeholders
      $query = "SELECT * FROM tableName";
      $stmt = $this->connection->prepare($query);

      // Execute the statement
      $stmt->execute();
      var_dump($stmt->fetch(PDO::FETCH_ASSOC));

      while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
          echo "Hi";
          // Export every row to a file
          fputcsv($data, $row);
      }

    After every fetch from the database, is the result of that fetch kept in memory? That is, when I do the second fetch, would memory hold the data of the first fetch as well as the second, so that with 90000 rows each new fetch result is added without the previous ones being released, and by the last fetch memory already holds the other 89999 rows? Is this how PDOStatement::fetch works, and performance-wise how does it stack up against PDOStatement::fetchAll?
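    The distinction the question circles around is buffered fetching (fetchAll pulls every row into one array) versus row-at-a-time fetching, where the application only ever holds the current row. Below is a minimal sketch of the row-at-a-time pattern, shown with Python's sqlite3 and csv modules rather than PDO; the table and file names are made up.

      # Stream rows straight to CSV; only the current row lives in application memory.
      import csv
      import sqlite3

      conn = sqlite3.connect("example.db")   # hypothetical database
      cur = conn.cursor()
      cur.execute("SELECT * FROM myTable")

      with open("export.csv", "w", newline="") as f:
          writer = csv.writer(f)
          for row in cur:            # the cursor yields one row at a time
              writer.writerow(row)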

    Read the article

  • Need help tuning a SQL query

    - by Viper
    Hello, I need some help to speed up this SQL statement. The execution time is around 125 ms, and during the runtime of my program this SQL (more precisely, identically structured SQL against different tables) will be called 300,000 times. The average row count in the tables is around 10,000,000 rows, and new rows (updates/inserts) are added with a timestamp each day. The data of interest to this particular export program lies in the last 1-3 days; maybe this is helpful for deciding which index to create. The data I need is the currently valid row for a given ID plus the preceding data row, so I can pick up the updates (if any exist). We use an Oracle 11g database and the .NET Framework 3.5. The SQL statement to tune:

      select ID_SOMETHING,     -- Number(12)
             ID_CONTRIBUTOR,   -- Char(4 Byte)
             DATE_VALID_FROM,  -- DATE
             DATE_VALID_TO     -- DATE
        from TBL_SOMETHING XID
       where ID_SOMETHING = :ID_INSTRUMENT
         and ID_CONTRIBUTOR = :ID_CONTRIBUTOR
         and DATE_VALID_FROM <= :EXPORT_DATE
         and DATE_VALID_TO >= :EXPORT_DATE
       order by DATE_VALID_FROM asc;

    I uploaded the current explain plan for this query here. I'm not a database expert, so I don't know which index type would best fit this requirement; I have seen that there are many different index types which could be applied. Maybe Oracle optimizer hints would help, too. Does anyone have a good idea for tuning this SQL, or can anyone point me in the right direction?

    Read the article

  • sIFR 3 rev 436 - copy link to clipboard

    - by Alan Forsyth
    This was originally a question, but is now a code enhancement, since it's a very minor (but useful) update. When heading (or other) text is used as a link with sIFR 3, you now get the two 'open link / open link in new window' options in the right-click flash context menu for the link. When I came across sIFR for the first time yesterday, I was wanting to copy a header (h2) link to the clipboard, on a site that used sIFR 2.x, and was frustrated that I couldn't. Thanks to the wonders of open source (and well written code), I can suggest the following enhancement to sIFR 3: [In the file flash/sIFR.as, find the section starting with the comment "// Have to set up menu items first!" through to ");" and replace with the following, then add font information to the .fla and export the swf as per the tutorial:] // Have to set up the menu items first! menuItems.push( new ContextMenuItem("Follow link", function() { getURL(sIFR.instance.primaryLink, sIFR.instance.primaryLinkTarget) }), new ContextMenuItem("Open link in new window", function() { getURL(sIFR.instance.primaryLink, "_blank") }), new ContextMenuItem("Copy link to clipboard", function() { System.setClipboard(sIFR.instance.primaryLink) }) ); Now I'm happy... :-) Alan.

    Read the article

  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable jar so I can share it with friends. But this simply packs up the whole project, which can become several megabytes big if I'm relying on a large library such as httpclient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do is copy the main class to a new project and then keep copying missing types till the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration?

    EDIT: Here's an example. Say I'm experimenting with web scraping, and so far I've tried to scrape the search-result pages of both youtube.com and wrzuta.pl. I have a bunch of classes that implement scraping in general, and a few that are specific to each of YouTube and Wrzuta. On top of this I have a basic GUI common to both scrapers, plus a few Wrzuta- and YouTube-specific buttons and options. The WrzutaGuiMain and YoutubeGuiMain classes each contain a main method to configure and show the GUI for the respective website. Can Eclipse look at each of these to determine which types are referenced?
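    An analogous idea in Python, purely for illustration (not an Eclipse feature): ask which modules a given entry-point script actually pulls in, then package only those. The script name below is hypothetical.

      # List the modules a hypothetical entry point really imports.
      from modulefinder import ModuleFinder

      finder = ModuleFinder()
      finder.run_script("YoutubeGuiMain.py")   # hypothetical entry point

      for name, mod in finder.modules.items():
          print(name, mod.__file__)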

    Read the article

  • Multiple calendars in Exchange Web Services

    - by user3559462
    I have multiple calendars in my mailbox, but using the EWS API 2.0 I can retrieve only one of them, the main Calendar folder. Now I want the whole list of calendars, together with the appointments and meetings in each. For example, I have three calendars, the first being the main one: 1. Calendar (color code: default) 2. Jorgen (color code: pink) 3. Soren (color code: yellow). I can retrieve all the values of the main "Calendar" using the code below:

      Folder inbox = Folder.Bind(service, WellKnownFolderName.Calendar);
      view.PropertySet = new PropertySet(BasePropertySet.IdOnly);
      // This results in a FindItem call to EWS.
      FindItemsResults<Item> results = inbox.FindItems(view);
      i = 1;
      m = results.TotalCount;
      if (results.Count() > 0)
      {
          foreach (var item in results)
          {
              PropertySet props = new PropertySet(AppointmentSchema.MimeContent, AppointmentSchema.ParentFolderId, AppointmentSchema.Id, AppointmentSchema.Categories, AppointmentSchema.Location);
              // This results in a GetItem call to EWS.
              var email = Appointment.Bind(service, item.Id, props);
              string iCalFileName = @"C:\export\appointment" + i + ".ics";
              // Save as .ics.
              using (FileStream fs = new FileStream(iCalFileName, FileMode.Create, FileAccess.Write))
              {
                  fs.Write(email.MimeContent.Content, 0, email.MimeContent.Content.Length);
              }
              i++;
          }
      }

    Now I want to get the schedules from all the remaining calendars as well, but I have not been able to. Please help; I need this urgently.

    Read the article

  • Converting FoxPro Date type to SQL Server 2005 DateTime using SSIS

    - by Avrom
    Hi, When using SSIS in SQL Server 2005 to convert a FoxPro database to a SQL Server database, if the given FoxPro database has a date type, SSIS assumes it is an integer type. The only way to convert it to a dateTime type is to manually select this type. However, that is not practical to do for over 100 tables. Thus, I have been using a workaround in which I use DTS on SQL Server 2000 which converts it to a smallDateTime, then make a backup, then a restore into SQL Server 2005. This workaround is starting to be a little annoying. So, my question is: Is there anyway to setup SSIS so that whenever it encounters a date type to automatically assume it should be converted to a dateTime in SQL Server and apply that rule across the board? Update To be specific, if I use the import/export wizard in SSIS, I get the following error: Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider. Followed by a list of a given table's date columns. If I manually set each one to a dateTime, it imports fine. But I do not wish to do this for a hundred tables.

    Read the article

  • HTTP crawler in Erlang

    - by ctp
    I'm writing a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and only get the content of 20+ of them back. I've generated a few files of 150 kB each to test the crawler, so I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file has been fetched? The test data server is online, so please try the code out; any hints are welcome :)

      -module(crawler).
      -define(BASE_URL, "http://46.4.117.69/").
      -export([start/0, send_reqs/0, do_send_req/1]).

      start() ->
        ibrowse:start(),
        proc_lib:spawn(?MODULE, send_reqs, []).

      to_url(Id) ->
        ?BASE_URL ++ integer_to_list(Id).

      fetch_ids() ->
        lists:seq(1, 50).

      send_reqs() ->
        spawn_workers(fetch_ids()).

      spawn_workers(Ids) ->
        lists:foreach(fun do_spawn/1, Ids).

      do_spawn(Id) ->
        proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

      do_send_req(Id) ->
        io:format("Requesting ID ~p ... ~n", [Id]),
        Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
        case Result of
          {ok, Status, _H, B} ->
            io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
          Err ->
            io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
        end.

    That's the output:

      Requesting ID 1 ...
      Requesting ID 2 ...
      Requesting ID 3 ...
      Requesting ID 4 ...
      Requesting ID 5 ...
      Requesting ID 6 ...
      Requesting ID 7 ...
      Requesting ID 8 ...
      Requesting ID 9 ...
      Requesting ID 10 ...
      Requesting ID 11 ...
      Requesting ID 12 ...
      Requesting ID 13 ...
      Requesting ID 14 ...
      Requesting ID 15 ...
      Requesting ID 16 ...
      Requesting ID 17 ...
      Requesting ID 18 ...
      Requesting ID 19 ...
      Requesting ID 20 ...
      Requesting ID 21 ...
      Requesting ID 22 ...
      Requesting ID 23 ...
      Requesting ID 24 ...
      Requesting ID 25 ...
      Requesting ID 26 ...
      Requesting ID 27 ...
      Requesting ID 28 ...
      Requesting ID 29 ...
      Requesting ID 30 ...
      Requesting ID 31 ...
      Requesting ID 32 ...
      Requesting ID 33 ...
      Requesting ID 34 ...
      Requesting ID 35 ...
      Requesting ID 36 ...
      Requesting ID 37 ...
      Requesting ID 38 ...
      Requesting ID 39 ...
      Requesting ID 40 ...
      Requesting ID 41 ...
      Requesting ID 42 ...
      Requesting ID 43 ...
      Requesting ID 44 ...
      Requesting ID 45 ...
      Requesting ID 46 ...
      Requesting ID 47 ...
      Requesting ID 48 ...
      Requesting ID 49 ...
      Requesting ID 50 ...
OK -- ID: 49 -- Status: "200" -- Content length: 150000 OK -- ID: 47 -- Status: "200" -- Content length: 150000 OK -- ID: 50 -- Status: "200" -- Content length: 150000 OK -- ID: 17 -- Status: "200" -- Content length: 150000 OK -- ID: 48 -- Status: "200" -- Content length: 150000 OK -- ID: 45 -- Status: "200" -- Content length: 150000 OK -- ID: 46 -- Status: "200" -- Content length: 150000 OK -- ID: 10 -- Status: "200" -- Content length: 150000 OK -- ID: 09 -- Status: "200" -- Content length: 150000 OK -- ID: 19 -- Status: "200" -- Content length: 150000 OK -- ID: 13 -- Status: "200" -- Content length: 150000 OK -- ID: 21 -- Status: "200" -- Content length: 150000 OK -- ID: 16 -- Status: "200" -- Content length: 150000 OK -- ID: 27 -- Status: "200" -- Content length: 150000 OK -- ID: 03 -- Status: "200" -- Content length: 150000 OK -- ID: 23 -- Status: "200" -- Content length: 150000 OK -- ID: 29 -- Status: "200" -- Content length: 150000 OK -- ID: 14 -- Status: "200" -- Content length: 150000 OK -- ID: 18 -- Status: "200" -- Content length: 150000 OK -- ID: 01 -- Status: "200" -- Content length: 150000 OK -- ID: 30 -- Status: "200" -- Content length: 150000 OK -- ID: 40 -- Status: "200" -- Content length: 150000 OK -- ID: 05 -- Status: "200" -- Content length: 150000 Update: thanks stemm for the hint with the wait_workers. I've combined your and mine code but same behaviour :( -module(crawler). -define(BASE_URL, "http://46.4.117.69/"). -export([start/0, send_reqs/0, do_send_req/2]). start() -> ibrowse:start(), proc_lib:spawn(?MODULE, send_reqs, []). to_url(Id) -> ?BASE_URL ++ integer_to_list(Id). fetch_ids() -> lists:seq(1, 50). send_reqs() -> spawn_workers(fetch_ids()). spawn_workers(Ids) -> %% collect reference to each worker Refs = [ do_spawn(Id) || Id <- Ids ], %% wait for response from each worker wait_workers(Refs). wait_workers(Refs) -> lists:foreach(fun receive_by_ref/1, Refs). receive_by_ref(Ref) -> %% receive message only from worker with specific reference receive {Ref, done} -> done end. do_spawn(Id) -> Ref = make_ref(), proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]), Ref. do_send_req(Id, {Pid, Ref}) -> io:format("Requesting ID ~p ... ~n", [Id]), Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)), case Result of {ok, Status, _H, B} -> io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]), %% send message that work is done Pid ! {Ref, done}; Err -> io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]), %% repeat request if there was error while fetching a page, do_send_req(Id, {Pid, Ref}) %% or - if you don't want to repeat request, put there: %% Pid ! {Ref, done} end. 
    Running the crawler works fine for a handful of files, but then the code doesn't even fetch the files in their entirety (each file is 150000 bytes); the crawler fetches some files only partially, see the following web server log :(

      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
      82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(
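    The behaviour being asked for (the parent not exiting until every request has finished) looks like this when sketched with Python's concurrent.futures instead of spawned Erlang processes. The base URL is the test server from the question; everything else is illustrative and says nothing about the partial-download problem itself.

      # Block until all 50 downloads have completed, then exit.
      from concurrent.futures import ThreadPoolExecutor, as_completed
      from urllib.request import urlopen

      BASE_URL = "http://46.4.117.69/"

      def fetch(i):
          with urlopen(BASE_URL + str(i), timeout=10) as resp:
              return i, len(resp.read())

      with ThreadPoolExecutor(max_workers=10) as pool:
          futures = [pool.submit(fetch, i) for i in range(1, 51)]
          for fut in as_completed(futures):   # waits for every worker
              i, size = fut.result()
              print(f"OK -- ID: {i:02d} -- Content length: {size}")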

    Read the article

  • How to change attribute on Scala XML Element

    - by Dave
    I have an XML file whose attributes I would like to map over with a script. For example: <a> <b attr1="100" attr2="50"/> </a> might have its attributes scaled by a factor of two: <a> <b attr1="200" attr2="100"/> </a> This page has a suggestion for adding attributes, but doesn't detail a way to map an existing attribute with a function (in fact that approach would make it very hard): http://www.scalaclass.com/book/export/html/1 What I've come up with is to manually build the (non-Scala) XML linked list... something like:

      // a typical match case for running through XML elements:
      case Elem(prefix, e, attributes, scope, children @ _*) => {
        var newAttribs = attributes
        for (attr <- newAttribs) attr.key match {
          case "attr1" =>
            newAttribs = newAttribs.append(new UnprefixedAttribute("attr1", (attr.value.head.text.toFloat * 2.0f).toString, attr.next))
          case "attr2" =>
            newAttribs = newAttribs.append(new UnprefixedAttribute("attr2", (attr.value.head.text.toFloat * 2.0f).toString, attr.next))
          case _ =>
        }
        Elem(prefix, e, newAttribs, scope, updateSubNode(children) : _*) // set new attribs and process the child elements
      }

    It's hideous, wordy, and needlessly re-orders the attributes in the output, which is bad for my current project due to some bad client code. Is there a Scala-esque way to do this?
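    For comparison only, here is the same "map an attribute with a function" operation sketched with Python's xml.etree instead of Scala's immutable XML classes; it shows the shape of the transformation, not a Scala answer.

      # Scale attr1/attr2 by two on every element that has them.
      import xml.etree.ElementTree as ET

      doc = ET.fromstring('<a><b attr1="100" attr2="50"/></a>')

      for el in doc.iter():
          for name in ("attr1", "attr2"):
              if name in el.attrib:
                  el.set(name, str(float(el.get(name)) * 2.0))

      print(ET.tostring(doc, encoding="unicode"))
      # <a><b attr1="200.0" attr2="100.0" /></a>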

    Read the article

  • Yaml::load_file acting differently between development and production (Rails)

    - by James
    Hi, I am completely stumped at the nature of this problem. We export data from our application into a 'cleaned' YAML file (stripping out IDs, created_at etc). Then we (will) allow users to import these files back into the application - it is the import that is completely bugging me out. In development, YAML::load_file(params[:uploaded_data].local_path) returns an array of YAML::Objects's (and it doesn't matter which of the number of different ways the file could be loaded): [#{"exception_count"="0", "title"="Start", "amount"="70.00", "colour"=nil, "repeat_type_id"="0", "repeat_interval"="1"}}, etc etc] Which is very nice, as the attributes also include the (associated model) exceptions that you see an exception_count for. However on production (rails 2.3.2, running REE 1.8.7 and 1.8.6 for testing, tested on two different production env's, and running production locally) it returns an array of the Objects within the YAML - in this case, Event: [#, repeat_type_id: 0, colour: nil, repeat_interval: 1, exception_count: 0, etc etc] Now this would be just perplexing if it also included the associated model Exception with it - however it doesn't. Can anyone at all shed some light on why the Yaml parser would behave so differently between production and development? I'm on rails 2.3.2, running REE 1.8.7; however I've also tested running Ruby 1.8.6 with exactly the same results. Thanks for any help!

    Read the article

  • MEF and ASP.NET MVC

    - by denis_n
    I want to use MEF with asp.net mvc. I wrote following controller factory: public class MefControllerFactory : DefaultControllerFactory { private CompositionContainer _Container; public MefControllerFactory(Assembly assembly) { _Container = new CompositionContainer(new AssemblyCatalog(assembly)); } protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType) { if (controllerType != null) { var controllers = _Container.GetExports<IController>(); var controllerExport = controllers.Where(x => x.Value.GetType() == controllerType).FirstOrDefault(); if (controllerExport == null) { return base.GetControllerInstance(requestContext, controllerType); } return controllerExport.Value; } else { throw new HttpException((Int32)HttpStatusCode.NotFound, String.Format( "The controller for path '{0}' could not be found or it does not implement IController.", requestContext.HttpContext.Request.Path ) ); } } } In Global.asax.cs I'm setting my controller factory: protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); ControllerBuilder.Current.SetControllerFactory(new MefControllerFactory.MefControllerFactory(Assembly.GetExecutingAssembly())); } I have an area: [Export(typeof(IController))] [PartCreationPolicy(CreationPolicy.NonShared)] public class HomeController : Controller { private readonly IArticleService _articleService; [ImportingConstructor] public HomeController(IArticleService articleService) { _articleService = articleService; } // // GET: /Articles/Home/ public ActionResult Index() { Article article = _articleService.GetById(55); return View(article); } } IArticleService is an interface. There is a class which implements IArticleService and Exports it. It works. Is this everything what I need for working with MEF? How can I skip setting PartCreationPolicy and ImportingConstructor for controller? I want to set my dependencies using constructor. When PartCreationPolicy is missing, I get following exception: A single instance of controller 'MvcApplication4.Areas.Articles.Controllers.HomeController' cannot be used to handle multiple requests. If a custom controller factory is in use, make sure that it creates a new instance of the controller for each request.

    Read the article

  • How to store a public key in a machine-level RSA key container

    - by Andrew Kimball
    I'm having a problem using a machine level RSA key container when storing only the public key of a public/private key pair. The following code creates a public/private pair and extracts the public key from that pair. The pair and the public key are stored in separate key containers. The keys are then obtained from those key containers at which point they should be the same as the keys going into the containers. The code works when CspProviderFlags.UseDefaultKeyContainer is specified for CspParameters.Flags (i.e. the key read back out from the PublicKey container is the same), but when CspProviderFlags.UseMachineKeyStore is specified for CspParameters.Flags the key read back from PublicKey is different. Why is the behaviour different, and what do I need to do differently to retrieve the public key from a machine-level RSA key container? var publicPrivateRsa = new RSACryptoServiceProvider(new CspParameters() { KeyContainerName = "PublicPrivateKey", Flags = CspProviderFlags.UseMachineKeyStore //Flags = CspProviderFlags.UseDefaultKeyContainer } ) { PersistKeyInCsp = true, }; var publicRsa = new RSACryptoServiceProvider(new CspParameters() { KeyContainerName = "PublicKey", Flags = CspProviderFlags.UseMachineKeyStore //Flags = CspProviderFlags.UseDefaultKeyContainer } ) { PersistKeyInCsp = true }; //Export the key. publicRsa.ImportParameters(publicPrivateRsa.ExportParameters(false)); Console.WriteLine(publicRsa.ToXmlString(false)); Console.WriteLine(publicPrivateRsa.ToXmlString(false)); //Dispose those two CSPs. using (publicRsa) { publicRsa.Clear(); } using (publicPrivateRsa) { publicRsa.Clear(); } publicPrivateRsa = new RSACryptoServiceProvider(new CspParameters() { KeyContainerName = "PublicPrivateKey", Flags = CspProviderFlags.UseMachineKeyStore //Flags = CspProviderFlags.UseDefaultKeyContainer } ); publicRsa = new RSACryptoServiceProvider(new CspParameters() { KeyContainerName = "PublicKey", Flags = CspProviderFlags.UseMachineKeyStore //Flags = CspProviderFlags.UseDefaultKeyContainer } ); Console.WriteLine(publicRsa.ToXmlString(false)); Console.WriteLine(publicPrivateRsa.ToXmlString(false)); using (publicRsa) { publicRsa.Clear(); } using (publicPrivateRsa) { publicRsa.Clear(); }

    Read the article

  • How do I display a UserControl without satisfying all imports

    - by Yost
    Dear SO, I'm having some issues with Silverlight 4 / MEF. I have a basic framework set up with a Silverlight Navigation app at the core (there was an image link to the diagram for clarification). The main app (Desu) contains some pages and controls that export and import nicely. I dynamically load controls from Desu.Controls (like an image viewer, which I identify with the IImageViewer interface) and some pages from Desu.Pages. The first problem I had was with dynamically loading pages and being able to navigate to them (e.g. use dummyhttp://blagh/desutestpage.aspx#/Activation when Desu.Pages was loaded from the xap). I solved this by using a custom MetaAttribute and a custom content loader. Now for the question part: I want to load the ImageViewerControl from Desu.Controls in HomePage in Desu. I haven't loaded Desu.Controls into the package, though. When I try to load the control it gives me a CompositionException because it can't satisfy the ImageViewControl import. I tried setting AllowRecomposition=true but that didn't help. So is it possible to load a control without satisfying all imports and, if yes, how does one do this?

    Read the article

  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key: ssh -l targetuser -i path/to/key targethost But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with: $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost Could not create directory '/var/www/.ssh'. Warning: Permanently added '<hostname>' (RSA) to the list of known hosts. Permission denied (publickey). I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid having to require a manual server config since this code will be deployed on a number of different environments. Any ideas? Here's a few examples of what I've tried that don't work: $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost Eventually, I'll be using this solution to run rsync as the apache user.

    Read the article

  • Eclipse won't believe I have Maven 2.2.1

    - by Andrew Clegg
    I have a project (built from an AppFuse template) that requires Maven 2.2.1. So I upgraded to this (from 2.1.0) and set my path and my M2_HOME and MAVEN_HOME env variables. Then I ran mvn eclipse:eclipse and imported the project into Eclipse (Galileo). However, in the problems list for the project (and at the top of the pom.xml GUI editor) it says: Unable to build project '/export/people/clegg/data/GanymedeWorkspace/funcserve/pom.xml; it requires Maven version 2.2.1 (Please ignore 'GanymedeWorkspace', I really am using Galileo!) This persists whether I set Eclipse to use its Embedded Maven implementation, or the external 2.2.1 installation, in the Preferences - Maven - Installations dialog. I've tried closing and reopening the project, reindexing the repository, cleaning the project, restarting the IDE, logging out and back in again, everything I can think of! But Eclipse still won't believe I have Maven 2.2.1. I just did a plugin update so I have the latest version of Maven Integration for Eclipse -- 0.9.8.200905041414. Does anyone know how to convince Eclipse I really do have the right version of Maven? It's like it's recorded the previous version somewhere else and won't pay any attention to my changes :-( Many thanks! Andrew.

    Read the article

  • Please convert this PHP code to Ruby

    - by Arpit Vaishnav
    <?php // amcharts.com export to image utility // set image type (gif/png/jpeg) $imgtype = 'jpeg'; // set image quality (from 0 to 100, not applicable to gif) $imgquality = 100; // get data from $_POST or $_GET ? $data = &$_POST; // get image dimensions $width = (int) $data['width']; $height = (int) $data['height']; // create image object $img = imagecreatetruecolor($width, $height); // populate image with pixels for ($y = 0; $y < $height; $y++) { // innitialize $x = 0; // get row data $row = explode(',', $data['r'.$y]); // place row pixels $cnt = sizeof($row); for ($r = 0; $r < $cnt; $r++) { // get pixel(s) data $pixel = explode(':', $row[$r]); // get color $pixel[0] = str_pad($pixel[0], 6, '0', STR_PAD_LEFT); $cr = hexdec(substr($pixel[0], 0, 2)); $cg = hexdec(substr($pixel[0], 2, 2)); $cb = hexdec(substr($pixel[0], 4, 2)); // allocate color $color = imagecolorallocate($img, $cr, $cg, $cb); // place repeating pixels $repeat = isset($pixel[1]) ? (int) $pixel[1] : 1; for ($c = 0; $c < $repeat; $c++) { // place pixel imagesetpixel($img, $x, $y, $color); // iterate column $x++; } } } // set proper content type header('Content-type: image/'.$imgtype); header('Content-Disposition: attachment; filename="chart.'.$imgtype.'"'); // stream image $function = 'image'.$imgtype; if ($imgtype == 'gif') { $function($img); } else { $function($img, null, $imgquality); } // destroy imagedestroy($img); ?

    Read the article

  • Exporting dataset to Excel file with multiple sheets in ASP.NET

    - by engg
    In a C# ASP.NET 3.5 web application, I need to export multiple DataTables (or a DataSet) to an Excel 2007 file with multiple sheets, and then present the user with an 'Open/Save' dialog box, WITHOUT saving the Excel file on the web server. I have used Excel Interop before, but I have been reading that it's not efficient and not the best approach for this, and that there are other ways to do it, two of them being: 1) converting the data in the DataTables to an XML string that Excel understands, and 2) using the Open XML SDK 2.0. It looks like the Open XML SDK 2.0 is better; please let me know. Are there any other ways to do it? I don't want to use any third-party tools. If I use the Open XML SDK, it creates an Excel file, right? I don't want to save it on the (Windows 2003) server's hard drive (I don't want to use Server.MapPath; these Excel files are created dynamically and are not needed on the server once the client gets them). I want to prompt the user to open/save it directly. I know how to do this when the 'XML string' approach is used. Please help. Thank you.
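    Not the Open XML SDK, but the same shape of solution sketched in Python with openpyxl: build the multi-sheet workbook entirely in memory and hand the bytes to the HTTP response, so nothing is ever written to the server's disk. Sheet names and data are made up.

      from io import BytesIO
      from openpyxl import Workbook

      wb = Workbook()
      ws1 = wb.active
      ws1.title = "Customers"
      ws1.append(["Id", "Name"])
      ws1.append([1, "Alice"])

      ws2 = wb.create_sheet("Orders")
      ws2.append(["Id", "Total"])
      ws2.append([1, 99.95])

      buffer = BytesIO()
      wb.save(buffer)               # in memory only, no file on disk
      payload = buffer.getvalue()   # bytes to stream back as an .xlsx download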

    Read the article

  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL(local dev) to the Heroku server along with migrating my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to whichever Heroku's DB? Do I need to modify anything in the file database.yml to make it happen? I ran the following command: heroku db:push and I am getting the error: Sending data 2 tables, 3 records !!! Caught Server Exception | ETA: --:--:-- Taps Server Error: PGError ERROR: duplicate key value violates unique constraint "unique_schema_migrations" I have 2 tables, one I create for my app and the other schema_migrations. The total number of entries among the 2 tables is 3. I'm also printing the number of entries I have in the table I have created and it's showing 0. Any ideas what I might be missing or what I am doing wrong? EDIT: I figured out the above, Heroku's DB already have schema_migrations the moment I ran migrate. New question: Does anyone know how I can exclude data from a specific table from being pushed to Heroku DB. The table to exclude in this case will be schema_migrations. Not so good solution: I googled around and someone else was having the same issue. He suggested naming the schema_migrations table to zschema_migrations. In this way data from the other tables will be pushed properly until it fails on the last table. It's a pretty bad solution but will do for the time being. A better solution will be to use an existing Rails command which can reset a specific table from a database. I don't think Rake can do that.

    Read the article

  • Using MEF with an exporting project that uses resources (XML) contained in the xap

    - by JSmyth
    I'm doing a proof-of-concept app in SL4 using MEF, and as part of the app I am importing another xap from an existing Silverlight project and displaying it in my host project. The problem is that the existing app uses some .xml files (as content) and it uses LINQ to XML to load these files, which are (assumed to be) bundled in the xap. When I compose the application the initialization fails because the host app doesn't contain the XML files. If I copy these XML files into the host project and run it, the composition works fine. However, I need to keep the XML files in the original project. Is there a way that I can download a xap and look at its contents for XML files, and then load them into the host xap at runtime, so that after the composition takes place the required XML resources can be found? Or should I work out some kind of contract with an import/export to pass the XML files to the host xap? As the people developing the imported xaps (should the project go ahead) are from a different company, I would like to keep changes to the way they develop their apps to a minimum.

    Read the article

  • How to override the default init.tcl

    - by Sean Murphy
    I'm working on a project where I want to make use of Tcl as the command interpreter. I have a working C library object which I can load from within the Tcl shell, but my problem is finding a way to do this automatically when starting a tclsh. Essentially my ultimate goal is to be able to run a script and have it load my library and run some initial startup Tcl code before dropping me back to the tclsh command prompt in interactive mode, e.g. tclsh -f myscript.tcl --then-switch-to-interactive or EXPORT TCLINIT=myscript.tcl tclsh. The basic goal is to avoid having to distribute tclsh and instead rely on local user installations of Tcl. All I would like to distribute is my library, a startup script and a shell command to launch tclsh with the library preloaded. I've tried using the environment variables TCLINIT and TCL_LIBRARY but they seem to have no effect. The only workable solutions I've found so far are to add "source myscript.tcl" to either the end of /usr/share/tcltk/tcl8.5.init.tcl or ~/.tclshrc. However, both of these "solutions" are imperfect, as they require modifying the default user workspace. It strikes me that there must be a way to handle this in Tcl, but my research so far hasn't yielded anything. Does anyone have any suggestions?

    Read the article

  • Does git clone work through NTLM proxies?

    - by AndreaG
    I've tried both using export http_proxy=http://[username]:[pwd]@[proxy] and git config --global http.proxy http://[username]:[pwd]@[proxy]. I couldn't make it work. It looks like git uses Basic authentication: Initialized empty Git repository in /home/.../.git/ * Couldn't find host github.com in the .netrc file, using defaults * About to connect() to github.com port 8080 (#0) * Trying 10.... * Connected to github.com (10....) port 8080 (#0) * Proxy auth using Basic with user '...' > GET http://github.com/sunlightlabs/fiftystates.git/info/refs HTTP/1.1 Proxy-Authorization: Basic MD... User-Agent: git/1.6.1.2 Host: github.com Pragma: no-cache Accept: */* Proxy-Connection: Keep-Alive < HTTP/1.1 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to t he Web Proxy filter is denied. ) < Via: 1.1 ... < Proxy-Authenticate: Negotiate < Proxy-Authenticate: Kerberos < Proxy-Authenticate: NTLM < Connection: Keep-Alive < Proxy-Connection: Keep-Alive < Pragma: no-cache < Cache-Control: no-cache < Content-Type: text/html < Content-Length: 4118 * The requested URL returned error: 407 * Closing connection #0 fatal: http://github.com/sunlightlabs/fiftystates.git/info/refs download error - The requested URL returned error: 407 Google search returned mixed and probably not updated results. Somewhere it says that curl is (was?) used under the hood, but its options are (were?) hardwired into code. For example, curl --proxy-ntlm --proxy ...:8080 google.com works, and I'd like to use the same option with git. I need some more definite answers here: has anybody succeed using git through Windows proxies? Which version? Thanks.

    Read the article

  • Getting "[Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near 'Microsoft.'

    - by brohjoe
    Hi Experts, I'm getting an error, "[Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near 'Microsoft.' Here is the code: Dim conn As ADODB.Connection Dim rst As ADODB.Recordset Dim stSQL As String Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object which is the DSN name. Set conn = New ADODB.Connection conn.ConnectionString = "sql_server" conn.Open 'conn.Execute (strSQL) On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE(Microsoft.ACE.OLEDB.12.0;" & _ "Data Source=C:\Workbook.xlsx;" & _ "Extended Properties=Excel 12.0; [Sales Orders])" conn.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the cntection objects Set rst = Nothing Set conn = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. conn.Close If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set conn = Nothing End If End Sub

    Read the article

  • ASIHTTPRequest wrapper usage for Macs

    - by Rob
    I am trying to apply the ASIHTTPRequest wrapper to a very basic Objective C program. I have already copied over the necessary files into my program and after giving myself an extreme headache trying to figure out how it works through their website I thought I would post a question on here. The files copied over were: ASIHTTPRequestConfig.h ASIHTTPRequestDelegate.h ASIProgressDelegate.h ASIInputStream.h ASIInputStream.m ASIHTTPRequest.h ASIHTTPRequest.m ASIFormDataRequest.h ASIFormDataRequest.m My program is very basic: #import <Foundation/Foundation.h> int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; // Defining the various variables. char firstName[20]; char lastName[20]; char rank[5]; int leaveAccrued; int leaveRequested; // User Input. NSLog (@"Please input First Name:"); scanf("%s", &firstName); NSLog (@"Please input Last Name:"); scanf("%s", &lastName); NSLog (@"Please input Rank:"); scanf("%s", &rank); NSLog (@"Please input the number leave days you have accrued:"); scanf("%i", &leaveAccrued); NSLog (@"Please input the number of leave days you are requesting:"); scanf("%i", &leaveRequested); // Print results. NSLog (@"Name: %s %s", firstName, lastName); NSLog (@"Rank: %s", rank); NSLog (@"Leave Accrued: %i", leaveAccrued); NSLog (@"Leave Requested: %i", leaveRequested); [pool drain]; return 0; } How do I utilize the wrapper to export these 5 basic variables to a web server via an http request?
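    What the ASIFormDataRequest call ultimately boils down to is an HTTP form POST of those five values. Below is a sketch of that underlying operation in Python with the requests library; it does not use the Objective-C wrapper, and the URL and field names are invented.

      import requests

      fields = {
          "firstName": "John",       # hypothetical values
          "lastName": "Doe",
          "rank": "SGT",
          "leaveAccrued": 30,
          "leaveRequested": 14,
      }

      resp = requests.post("http://example.com/leave-request", data=fields, timeout=10)
      print(resp.status_code, resp.text)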

    Read the article

  • How to merge duplicates in 2D python arrays

    - by Wei Lou
    Hi, I have a set of data similar to this:

      No  Start Time    End Time      CallType  Info
      1   13:14:37.236  13:14:53.700  Ping1     RTT(Avr):160ms
      2   13:14:58.955  13:15:29.984  Ping2     RTT(Avr):40ms
      3   13:19:12.754  13:19:14.757  Ping3_1   RTT(Avr):620ms
      3   13:19:12.754                Ping3_2   RTT(Avr):210ms
      4   13:14:58.955  13:15:29.984  Ping4     RTT(Avr):360ms
      5   13:19:12.754  13:19:14.757  Ping1     RTT(Avr):40ms
      6   13:19:59.862  13:20:01.522  Ping2     RTT(Avr):163ms
      ...

    When I parse through it, I need to merge the results of Ping3_1 and Ping3_2, take the average of those two rows, and export them as one row. So the end result would look like this:

      No  Start Time    End Time      CallType  Info
      1   13:14:37.236  13:14:53.700  Ping1     RTT(Avr):160ms
      2   13:14:58.955  13:15:29.984  Ping2     RTT(Avr):40ms
      3   13:19:12.754  13:19:14.757  Ping3     RTT(Avr):415ms
      4   13:14:58.955  13:15:29.984  Ping4     RTT(Avr):360ms
      5   13:19:12.754  13:19:14.757  Ping1     RTT(Avr):40ms
      6   13:19:59.862  13:20:01.522  Ping2     RTT(Avr):163ms

    Currently I am concatenating columns 0 and 1 to make a unique key, finding the duplicates there, and then doing the rest of the special treatment for those parallel pings. It is not elegant at all. I just wonder what the better way to do it is. Thanks!
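    A sketch of one way to do the merge, assuming the column layout from the sample above: group rows by the No column, average the parsed RTT values, and collapse the Ping3_1/Ping3_2 call types back to their base name.

      import re
      from collections import defaultdict

      # Two sample rows in the same column order as the table above.
      rows = [
          ["3", "13:19:12.754", "13:19:14.757", "Ping3_1", "RTT(Avr):620ms"],
          ["3", "13:19:12.754", "",             "Ping3_2", "RTT(Avr):210ms"],
      ]

      groups = defaultdict(list)
      for row in rows:
          groups[row[0]].append(row)

      merged = []
      for no, group in groups.items():
          rtts = [int(re.search(r"(\d+)ms", r[4]).group(1)) for r in group]
          first = group[0]
          call_type = first[3].split("_")[0]              # "Ping3_1" -> "Ping3"
          end_time = next((r[2] for r in group if r[2]), "")
          merged.append([no, first[1], end_time, call_type,
                         f"RTT(Avr):{sum(rtts) // len(rtts)}ms"])

      print(merged)
      # [['3', '13:19:12.754', '13:19:14.757', 'Ping3', 'RTT(Avr):415ms']]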

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different webhosts, and I'm looking for a way to periodically update the one server using the other. Here's what I'm looking for: As automated as possible - ideally, without any involvement on my part once it's set up. Pushes a number of databases, in their entirely (including any schema changes) from one server to the other Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did. Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks it up and restores it. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshes is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody. UPDATE: I ended up designing a custom solution to get this done using BAT files, 7Zip,command line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!

    Read the article
