Search Results

Search found 4099 results on 164 pages for 'bulk export'.


  • Projects integration question

    - by qkrsppopcmpt
    Another team has a legacy system, a data aggregator, implemented as a web service using Java, SOAP, MTOM, Tomcat and Axis2. They have WSDL files defining operations such as search, retrieve data, upload and download. Our team has a newly developed website built with Ruby on Rails and MySQL. It is a sort of social network: users can register, add friends, upload images and videos, and also search data. We are required to connect the two systems. One possible solution I can think of is:
    - Add components to our website; the components invoke services on the aggregator.
    - Synchronize the website database to the aggregator.
    My doubts are: 1. How do we add these components to our website? Should the components use Java, Ruby, or a Java-to-Ruby adapter? It should be possible for Ruby to invoke the web service - that is the point of web services - but can Ruby call the services in the WSDL directly? And how do we deal with the different data structures? 2. How do we synchronize our database to the aggregator? I think the best way is again through web service invocation, such as upload. That means we have to export the DB records into XML files and then write some tools to upload them. The web service project supports MTOM, so it is fine to upload huge data. Am I on the right track? Can anybody give me some hints? Thanks.

  • Summarising grouped records in a dataframe in R (...again)

    - by monch1962
    Hello all, (I tried to ask this question earlier today, but later realised I had over-simplified it; the answers I received were correct, but I couldn't use them because of that over-simplification. Here's my second attempt.) I have a data frame in R that looks like:

        "Timestamp", "Source", "Target", "Length", "Content"
        0.1, P1, P2, 5, "ABCDE"
        0.2, P1, P2, 3, "HIJ"
        0.4, P1, P2, 4, "PQRS"
        0.5, P2, P1, 2, "ZY"
        0.9, P2, P1, 4, "SRQP"
        1.1, P1, P2, 1, "B"
        1.6, P1, P2, 3, "DEF"
        2.0, P2, P1, 3, "IJK"
        ...

    and I want to convert it to:

        "StartTime", "EndTime", "Duration", "Source", "Target", "Length", "Content"
        0.1, 0.4, 0.3, P1, P2, 12, "ABCDEHIJPQRS"
        0.5, 0.9, 0.4, P2, P1, 6, "ZYSRQP"
        1.1, 1.6, 0.5, P1, P2, 4, "BDEF"
        ...

    Putting this into English: I want to group consecutive records that share the same 'Source' and 'Target', then print a single record per group showing the StartTime, EndTime and Duration (= EndTime - StartTime) for that group, along with the sum of the Lengths for that group and a concatenation of the Content strings (which will all be strings) in that group. The Timestamp values always increase throughout the data frame. I had a look at melt/recast and have a feeling it could be used to solve the problem, but I couldn't get my head around the documentation. I suspect it's possible to do this within R, but I really don't know where to start. In a pinch I could export the data frame and do it in e.g. Python, but I'd prefer to stay within R if possible. Thanks in advance for any assistance you can provide.
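
    The question asks for an R solution, but to make the intended grouping concrete, here is a rough sketch of the same consecutive-run logic in Python/pandas (the data frame is retyped from the example above; this illustrates the algorithm, it is not a drop-in answer for the R workflow):

        import pandas as pd

        # Data retyped from the example above
        df = pd.DataFrame({
            "Timestamp": [0.1, 0.2, 0.4, 0.5, 0.9, 1.1, 1.6, 2.0],
            "Source":    ["P1", "P1", "P1", "P2", "P2", "P1", "P1", "P2"],
            "Target":    ["P2", "P2", "P2", "P1", "P1", "P2", "P2", "P1"],
            "Length":    [5, 3, 4, 2, 4, 1, 3, 3],
            "Content":   ["ABCDE", "HIJ", "PQRS", "ZY", "SRQP", "B", "DEF", "IJK"],
        })

        # A new run starts whenever (Source, Target) differs from the previous row
        run = (df[["Source", "Target"]] != df[["Source", "Target"]].shift()).any(axis=1).cumsum()

        out = df.groupby(run).agg(
            StartTime=("Timestamp", "first"),
            EndTime=("Timestamp", "last"),
            Source=("Source", "first"),
            Target=("Target", "first"),
            Length=("Length", "sum"),
            Content=("Content", "".join),
        )
        out["Duration"] = out["EndTime"] - out["StartTime"]
        print(out)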

  • WCF: Exposed Object Model - stuck in a loop

    - by Mark
    Hi, I'm working on a pretty big WSSF project. I have a normal object model in the business layer. E.g. a customer has an orders collection property; when this is accessed, it loads from the data layer (lazy loading). An order has a productCollection property, etc. Now, the bit I'm finding tricky is exposing this via WCF. I want to export a collection of orders. The client app will also need information about the customers. Using the WSSF data contract designer I have set it up so that customers have a property called "order collection". This is fine if you have a customer object and would like to look at its orders, but if you have an order object there is no customer property, so it doesn't work going up the hierarchy. I've tried adding a customer property to the orders object, but then the code gets stuck in a loop when it loads up the data contracts. This is because it doesn't load on demand like in the business layer; I need to load all properties before the objects can be sent out via WCF. It ends up loading an order, then the customer for that order, then the orders for that customer, then the customer for each of those orders, and so on. I'm sure I've got all this wrong. Help!

  • Is it possible to parameterize a MEF import?

    - by Josh Einstein
    I am relatively new to MEF so I don't fully understand the capabilities. I'm trying to achieve something similar to Unity's InjectionMember. Let's say I have a class that imports MEF parts. For the sake of simplicity, let's take the following class as an example of the exported part:

        [Export]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class Logger
        {
            public string Category { get; set; }
            public void Write(string text) { }
        }

        public class MyViewModel
        {
            [Import]
            public Logger Log { get; set; }
        }

    Now what I'm trying to figure out is if it's possible to specify a value for the Category property at the import. Something like:

        public class MyViewModel
        {
            [MyImportAttribute(Category = "MyCategory")]
            public Logger Log { get; set; }
        }

        public class MyOtherViewModel
        {
            [MyImportAttribute(Category = "MyOtherCategory")]
            public Logger Log { get; set; }
        }

    For the time being, what I'm doing is implementing IPartImportsSatisfiedNotification and setting the Category in code. But obviously I would rather keep everything neatly in one place.

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group to meet testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool rather than re-implement a whole lot of the same thing on a file system. I don't know if a NoSQL approach is even the best fit for what we're trying to accomplish, but perhaps if I describe what we need/want you all can help.
    - Most of our files are large, 50+ GB in size, held in a proprietary, third-party format.
    - We need to be able to access each file by a name/date/source/time/artifact combination - essentially a key-value-pair style look-up.
    - When we query for a file, we don't want to have to load all of it into memory. The files are really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it.
    - We want to easily add, remove, and export files from storage.
    - We'd like to set up automatic file replication between two servers (we can write a script for this) - that is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.
    - We also have other, smaller files that have a tree-type relationship with the big files: one file's content points to the next, and so on. It's not a "spoked wheel", it's a full-blown tree.
    - We'd prefer a Python, C or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind as long as it works, gets the job done, and saves us time.
    What do you think? Is there something out there like this?
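
    To make the "reference, not contents" look-up idea concrete, here is a minimal Python sketch of a key-value index that stores only paths to the big files; the key layout, the file names and the ProprietaryReader in the comments are assumptions for illustration, not a recommendation of any particular NoSQL product:

        import os
        import shelve

        # Minimal sketch: the index stores *paths* to the big files, never their
        # contents, so a lookup hands back a reference that a third-party reader
        # API can open and ingest in chunks.

        def make_key(name, date, source, time, artifact):
            return "|".join([name, date, source, time, artifact])

        def register(index_path, key_parts, file_path):
            with shelve.open(index_path) as index:
                index[make_key(*key_parts)] = os.path.abspath(file_path)

        def lookup(index_path, key_parts):
            with shelve.open(index_path) as index:
                return index[make_key(*key_parts)]   # a path, not the 50 GB payload

        # usage sketch (names are hypothetical):
        # register("catalog.db", ("run42", "2010-05-01", "rigA", "0930", "raw"), "/data/run42.bin")
        # path = lookup("catalog.db", ("run42", "2010-05-01", "rigA", "0930", "raw"))
        # reader = ProprietaryReader(path)   # hypothetical third-party API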

  • Serializing Configurations for a Dependency Injection / Inversion of Control Container

    - by Joshua Starner
    I've been researching Dependency Injection and Inversion of Control practices lately in an effort to improve the architecture of our application framework, and I can't seem to find a good answer to this question. It's very likely that I have my terminology confused, mixed up, or that I'm just naive to the concept right now, so any links or clarification would be appreciated. Many examples of DI and IoC containers don't illustrate how the container will connect things together when you have a "library" of possible "plugins", or how to "serialize" a given configuration. (From what I've read about MEF, having multiple declarations of [Export] for the same type will not work if your object only requires 1 [Import].) Maybe that's a different pattern or I'm blinded by my current way of thinking. Here's some code for an example reference:

        public abstract class Engine { }
        public class FastEngine : Engine { }
        public class MediumEngine : Engine { }
        public class SlowEngine : Engine { }

        public class Car
        {
            public Car(Engine e)
            {
                engine = e;
            }
            private Engine engine;
        }

    This post talks about "fine-grained context", where 2 instances of the same object need different implementations of the "Engine" class: http://stackoverflow.com/questions/2176833/ioc-resolve-vs-constructor-injection Is there a good framework that helps you configure or serialize a configuration to achieve something like this without hard coding it or hand-rolling the code to do it?

        public class Application
        {
            public void Go()
            {
                Car c1 = new Car(new FastEngine());
                Car c2 = new Car(new SlowEngine());
            }
        }

    Sample XML:

        <XML>
          <Cars>
            <Car name="c1" engine="FastEngine" />
            <Car name="c2" engine="SlowEngine" />
          </Cars>
        </XML>
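
    As a language-neutral illustration of what "serializing a configuration" could mean here, the following Python sketch reads XML shaped like the sample above and wires objects from a name-to-class registry; the class names mirror the C# example, and this only shows the mechanics, not any particular container's feature:

        import xml.etree.ElementTree as ET

        # Hypothetical stand-ins for the C# Engine/Car hierarchy above
        class Engine: ...
        class FastEngine(Engine): ...
        class SlowEngine(Engine): ...

        class Car:
            def __init__(self, engine: Engine):
                self.engine = engine

        # Registry that maps the string in the XML to a concrete type
        ENGINES = {"FastEngine": FastEngine, "SlowEngine": SlowEngine}

        def build_cars(xml_text: str) -> dict:
            # Turn the declarative <Car name=... engine=.../> config into wired objects
            root = ET.fromstring(xml_text)
            return {
                node.get("name"): Car(ENGINES[node.get("engine")]())
                for node in root.iter("Car")
            }

        cars = build_cars("""
        <XML>
          <Cars>
            <Car name="c1" engine="FastEngine" />
            <Car name="c2" engine="SlowEngine" />
          </Cars>
        </XML>
        """)
        print(type(cars["c1"].engine).__name__)   # FastEngine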

  • What good open source programs exist for fuzzing popular image file types?

    - by JohnnySoftware
    I am looking for a free, open source, portable fuzzing tool for popular image file types, written in Java, Python, or Jython. Ideally, it would accept specifications for the fuzzable fields using some kind of declarative constraints; a non-procedural grammar for specifying constraints is greatly preferred. Otherwise, I might as well write them all in Python or whatever, just specifying ranges of valid values or expressions for them. Ideally, it would support some kind of generative programming to export the fuzzer into various programming languages, to suit cases where more customization is required. If it supported a direct-manipulation GUI for controlling parameter values and ranges, that would be nice too. The file formats that should be supported are:
    - GIF
    - JPEG
    - PNG
    So basically, it should be a sort of toolkit consisting of a ready-to-run utility and a framework or library, and it should be capable of generating the fuzzed files directly as well as from the programs it generates. It needs to be simple so that test images can be created quickly, and it should have a batch capability for creating a series of images - creating just one at a time would be too painful. I do not want a hacking tool, just a QA tool. Basically, I just want to address concerns that it is taking too long to get commonplace image rendering/parsing libraries stable and trustworthy.
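
    For scale, a bare-bones mutation fuzzer of the kind described - batch generation of corrupted variants of a seed image - fits in a few lines of Python; this sketch is only a starting point (file names and the mutation policy are assumptions) and has none of the declarative, grammar-driven constraints asked about:

        import random
        from pathlib import Path

        # Flip a handful of random bytes in a seed image and write out numbered
        # variants; the first bytes are left alone so the file is still recognised
        # as GIF/JPEG/PNG by its magic number.
        def fuzz_image(seed_path, out_dir, count=100, flips=8, keep_header=16):
            data = bytearray(Path(seed_path).read_bytes())
            Path(out_dir).mkdir(parents=True, exist_ok=True)
            for i in range(count):
                mutated = bytearray(data)
                for _ in range(flips):
                    pos = random.randrange(keep_header, len(mutated))
                    mutated[pos] = random.randrange(256)
                Path(out_dir, f"fuzz_{i:04d}{Path(seed_path).suffix}").write_bytes(mutated)

        # usage sketch: fuzz_image("seed.png", "fuzzed_pngs", count=500)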

  • What are SharePoint (MOSS 2007) development/deployment best practices?

    - by Satish
    We are deploying SharePoint MOSS 2007 at work, and I'm trying to come up with a SharePoint development and deployment methodology. We have Dev/QA/Prod environments, and I need a way, preferably automated, to deploy changes from Dev to QA and from there to Prod. We are creating site collections, web parts, etc. Some of it is done directly within SharePoint, some through SharePoint Designer or Visual Studio. I'm looking for a way to extract this and deploy it to the other environments. I tried stsadm backup/restore, import/export, etc., but they all move the data along with it; I just need the structure deployed. Content deployment paths and jobs do the same thing. We use MSBuild and CruiseControl.NET for our other .NET projects to automate the build/deployment process, and I'm looking for something similar for SharePoint if possible. What are your best practices for this? Since my team is still learning, we don't have a defined process, and we are open to changing our development process if needed.

  • Eclipse sync workspaces/perspectives/preferences across computers

    - by Naren
    I have a project I need to work on from two different computers, at work and at home. I need to be able to work on the code from both computers, so the issue is twofold: (1) sharing the code, and (2) sharing the workspace. Number 1 is simple enough with svn, but I feel icky committing broken code to svn just so I can access it again from home. I can live with this, but is there a better option? To elaborate on 2: I have a highly customized Eclipse setup on one of the computers, where I spent hours adding plugins and tweaking every tiny config option I could access to get it to the point where it is just right. It'll be a pain redoing every single change on the other computer; is there some way to automatically sync that? I know I can export preferences from Eclipse and import them, but I don't want to have to do that manually each time I change something. (Also, I don't think exporting preferences also exports perspectives?) Both computers run Windows.

  • pythonpath issue? "python2.5: can't open file 'dev_appserver.py': [Errno 2] No such file or directory"

    - by Linc
    I added this line to my .bashrc (Ubuntu 9.10):

        export PYTHONPATH=/opt/google_appengine/

    And then I ran the dev_appserver through python2.5 on Ubuntu like this:

        $ python2.5 dev_appserver.py guestbook/
        python2.5: can't open file 'dev_appserver.py': [Errno 2] No such file or directory

    As you can see, it can't find dev_appserver.py even though it's in my /opt/google_appengine/ directory. Just to make sure it's not a permissions issue I did this:

        sudo chmod a+rwx dev_appserver.py

    To check whether it's been added to the system path for python2.5 I did this:

        $ python2.5
        Python 2.5.5 (r255:77872, Apr 29 2010, 23:59:20)
        [GCC 4.4.1] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import sys
        >>> for line in sys.path: print line
        ...
        /usr/local/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg
        /opt/google_appengine/demos
        /opt/google_appengine
        /usr/local/lib/python25.zip
        ...

    The directory shows up in this list, so I don't understand why it can't be found when I type:

        $ python2.5 dev_appserver.py guestbook/

    I'm new to Python so I would appreciate any help. Thanks.
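
    One distinction that may be worth checking, sketched in Python below: PYTHONPATH entries end up on sys.path, which controls what import can find, but it has no effect on how the interpreter resolves the script filename given on the command line - that is looked up relative to the current working directory. The snippet only illustrates that difference and makes no claim about this particular setup:

        import os
        import sys

        print("/opt/google_appengine" in sys.path)        # True, thanks to the .bashrc export
        print(os.path.exists("dev_appserver.py"))          # False unless the shell is cd'ed into that directory
        print(os.path.exists("/opt/google_appengine/dev_appserver.py"))  # the path form that would resolve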

  • Putting together CSV Cells in Excel with a macro

    - by Eric Kinch
    So, I have a macro to export data into CSV format and it's working great (code at the bottom). The problem is the data I am putting into it. At the moment each record comes out as:

        Firstname,Lastname,username,password,description

    I'd like to change it so I get:

        Firstname Lastname,Firstname,Lastname,username,password,description

    What I'd like to do is manipulate my existing macro to accomplish this. I'm not so good at VBA, so any input or a shove in the right direction would be fantastic. Thanks!

        Sub Make_CSV()
            Dim sFile As String
            Dim sPath As String
            Dim sLine As String
            Dim r As Integer
            Dim c As Integer

            r = 1                             'Starting row of data
            sPath = "C:\CSVout\"
            sFile = "MyText_" & Format(Now, "YYYYMMDD_HHMMSS") & ".CSV"

            Close #1
            Open sPath & sFile For Output As #1

            Do Until IsEmpty(Range("A" & r))  'You can also Do Until r = 17 (get the first 16 cells)
                sLine = ""
                c = 1
                Do Until IsEmpty(Cells(1, c)) 'Number of Columns - You could use a FOR / NEXT loop instead
                    sLine = sLine & """" & Replace(Cells(r, c), ";", ":") & """" & ","
                    c = c + 1
                Loop
                Print #1, Left(sLine, Len(sLine) - 1) 'Remove the trailing comma
                r = r + 1
            Loop

            Close #1
        End Sub
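
    Purely to pin down the desired row shape, here is the same transformation sketched with Python's csv module (file names are placeholders, and the input is assumed to have exactly the five columns shown above); the actual fix would of course go into the VBA macro:

        import csv

        # Prepend a combined "Firstname Lastname" cell to each five-column record.
        with open("users.csv", newline="") as src, open("users_out.csv", "w", newline="") as dst:
            writer = csv.writer(dst)
            for first, last, user, password, description in csv.reader(src):
                writer.writerow([f"{first} {last}", first, last, user, password, description])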

  • Google MapView doesn't work after signing the app

    - by user164589
    Hi guys, I am facing an Android application signing problem. My application contains a Google MapView. When I compile the app and run it on the emulator, the MapView works fine, but after signing the app, the MapView doesn't work. I have a Google Maps API key, and it works on the emulator. I was able to sign the app once, two months ago; then I upgraded the app, and now I need to sign it again. I actually don't know why the signed app's MapView doesn't work. How do I fix it? Please advise. I used the following steps to sign the app:
    1. Run Eclipse.
    2. Select the project.
    3. Right click - Android Tools - Export Signed Application Package - then fill in the forms. (In the forms, validity years: 200, and all passwords are the same.)
    Can you suggest anything? Thanks in advance.

  • SSRS code variable resetting on new page

    - by edmicman
    In SSRS 2008 I am trying to maintain a SUM of SUMs on a group using custom code. The reason is that I have a table of data, grouped and returning SUMs of the data, and I have a filter on the group to remove lines where the group sums are zero. Everything works except that I'm running into problems with the group totals - they should be summing the visible group totals but are instead summing the entire dataset. There are tons of articles about how to work around this, usually using custom code, so I've made custom functions and variables to maintain a counter:

        Public Dim GroupMedTotal as Integer
        Public Dim GrandMedTotal as Integer

        Public Function CalcMedTotal(ThisValue as Integer) as Integer
            GroupMedTotal = GroupMedTotal + ThisValue
            GrandMedTotal = GrandMedTotal + ThisValue
            Return ThisValue
        End Function

        Public Function ReturnMedSubtotal() as Integer
            Dim ThisValue as Integer = GroupMedTotal
            GroupMedTotal = 0
            Return ThisValue
        End Function

    Basically, CalcMedTotal is fed the SUM of a group and maintains a running total of that sum. Then in the group total line I output ReturnMedSubtotal, which is supposed to give me the accumulated total and reset it for the next group. This actually works great, EXCEPT that it is resetting the GroupMedTotal value on each page break. I don't have page breaks explicitly set; it's just the natural break in the SSRS viewer. If I export the results to Excel, everything works and looks correct. If I output Code.GroupMedTotal on each group row, I see it count correctly, and then if a group spans multiple pages, on the next page GroupMedTotal is reset and begins counting from zero again. Any help with what's going on or how to work around this? Thanks!

  • Save PowerPoint slides as images

    - by nihi_l_ist
    I want to save the slides of a presentation file as images in some directory, using this code:

        OpenFileDialog o = new OpenFileDialog();
        o.ShowDialog();
        if (o.FileName != null)
        {
            ApplicationClass pptApplication = new ApplicationClass();
            Presentation pptPresentation = pptApplication.Presentations.Open(o.FileName,
                MsoTriState.msoFalse, MsoTriState.msoFalse, MsoTriState.msoFalse);

            FolderBrowserDialog fb = new FolderBrowserDialog();
            fb.ShowDialog();

            for (int i = 0; i < pptPresentation.Slides.Count; i++)
            {
                pptPresentation.Slides[i].Export(fb.SelectedPath + "Slide" + (i + 1) + ".jpg", "jpg", 320, 240);
            }
        }

    with these namespaces:

        using Microsoft.Office.Core;
        using Microsoft.Office.Interop.PowerPoint;

    But I get the following error:

        'Microsoft.Office.Interop.PowerPoint.ApplicationClass' does not contain a definition for 'Presentations' and no extension method 'Presentations' accepting a first argument of type 'Microsoft.Office.Interop.PowerPoint.ApplicationClass' could be found (are you missing a using directive or an assembly reference?)

    Though it does seem to have a property called Presentations. What am I doing incorrectly?

  • Convert a JRuby 1.8 string to Windows encoding?

    - by Arne
    Hey, I want to export some data from my JRuby on Rails webapp to Excel, so I create a CSV string and send it as a download to the client using:

        send_data(text, :filename => "file.csv", :type => "text/csv; charset=CP1252", :encoding => "CP1252")

    The file seems to be in UTF-8, which Excel cannot read correctly. I googled the problem and found that Iconv can convert encodings, so I try to do that with:

        ic = Iconv.new('CP1252', 'UTF-8')
        text = ic.iconv(text)

    but when I send the converted text it does not make any difference: it is still UTF-8 and Excel cannot read the special characters. There are several solutions out there using Iconv, so this seems to work for others, and when I convert the file manually with iconv on the Linux shell it works. What am I doing wrong? Is there a better way? I'm using:
    - jruby 1.3.1 (ruby 1.8.6p287) (2009-06-15 2fd6c3d) (Java HotSpot(TM) Client VM 1.6.0_19) [i386-java]
    - Debian Lenny
    - Glassfish app server
    - Iceweasel 3.0.6
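
    A quick way to check whether the conversion ever happened is to inspect the raw bytes of the downloaded file; the snippet below (Python, not JRuby, and with an arbitrary sample string) just shows what the same text looks like in UTF-8 versus CP1252, so you can tell which one actually reached the client:

        text = u"Größe"                      # arbitrary sample string with umlaut and eszett
        print(text.encode("utf-8"))          # b'Gr\xc3\xb6\xc3\x9fe'  - two bytes per special character
        print(text.encode("cp1252"))         # b'Gr\xf6\xdfe'          - one byte per special character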

  • How to accept a confirmation automatically in PowerShell for Outlook

    - by user2919845
    I have a script that exports attachments from email in Outlook (see below). It works correctly on one PC, but on another PC there is a problem: Outlook shows a message and wants an answer: Permit / Deny / Help. If I manually click Permit or Deny it works correctly, but I want to automate it. Can you give me some suggestions on how to do that in PowerShell? I have tried to set Outlook not to show this message, but I didn't succeed. My script:

        # <-- Script ---------->
        # the script works with the Outlook Inbox folder:
        # check if an email has ".txt" attachments and save those attachments to $filepath

        # path for exported files - attachments
        $filepath = "d:\Exported_files\"

        # create the Outlook object
        $o = New-Object -comobject outlook.application
        $n = $o.GetNamespace("MAPI")

        # $f - the Inbox folder ("dorucena posta")
        $f = $n.GetDefaultFolder(6) # 6 - Inbox

        # select the newest 10 emails, and from those only the ones with attachments
        $f.Items | select -last 10 | Where {$_.Attachments} | foreach {
            # process only unread mail
            if ($_.unread -eq $True) {
                # mark processed mail as read, so it is not processed again the next day
                $_.unread = $False
                $SenderName = $_.SenderName
                Write-Host "Email from: ", $SenderName
                # process all attachments
                $_.attachments | foreach {
                    $a = $_.filename
                    If ($a.Contains(".txt")) {
                        Write-Host $SenderName, " ", $a
                        # copy *.txt attachments to folder $filepath
                        $_.saveasfile((Join-Path $filepath "$a"))
                    }
                }
            }
        }
        Write-Host "Finish"
        # <------ End Script ---------------------------------->

  • Extra new lines with several outputStream.write calls

    - by Sam
    Hi all, I am writing a JSP to export data in Excel format to the user, and an Excel file is received on the client side. However, since there is a large amount of data and I don't want to keep it all in server memory and write it at the end, I try to divide it up and write it in several passes. The problem is that each extra write(..) causes extra new lines at the top of the Excel worksheet, and the extra data is placed after these new lines. Does anyone know the reason? The code is something like this:

        response.setHeader("Content-disposition", "attachment;filename=DocuShareSearch.xls");
        response.setHeader("Content-Type", "application/octet-stream");

        responseContent = "<table><tr><td>12131</td></tr>.......";
        byte[] responseByte1 = responseContent.getBytes("utf-16");
        outputStream.write(responseByte1, 0, responseByte1.length);

        responseContent = ".....<tr><td>12131</td></tr></table>";
        byte[] responseByte2 = responseContent.getBytes("utf-16");
        outputStream.write(responseByte2, 0, responseByte2.length);

        outputStream.close();
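
    One thing that may be worth ruling out, illustrated below in Python because the bytes are easy to print: every independently encoded UTF-16 chunk starts with its own byte-order mark, so concatenating several separately encoded chunks scatters BOM bytes through the stream, which spreadsheet importers can render as stray cells or blank lines. This only illustrates the effect; it is not a confirmed diagnosis of the JSP above:

        chunk1 = u"<table><tr><td>12131</td></tr>".encode("utf-16")
        chunk2 = u"<tr><td>12131</td></tr></table>".encode("utf-16")
        print(chunk1[:2])   # the BOM (b'\xff\xfe' on a little-endian build)
        print(chunk2[:2])   # and another BOM at the start of the second chunk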

  • MEF GetExports<T, TMetaDataView> returning nothing with AllowMultiple = True

    - by sohum
    I don't understand MEF very well, so hopefully this is a simple fix in how I think it works. I'm trying to use MEF to get some information about a class and how it should be used, using the metadata options. My interfaces and attribute look like this:

        public interface IMyInterface { }

        public interface IMyInterfaceInfo
        {
            Type SomeProperty1 { get; }
            double SomeProperty2 { get; }
            string SomeProperty3 { get; }
        }

        [MetadataAttribute]
        [AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
        public class ExportMyInterfaceAttribute : ExportAttribute, IMyInterfaceInfo
        {
            public ExportMyInterfaceAttribute(Type someProperty1, double someProperty2, string someProperty3)
                : base(typeof(IMyInterface))
            {
                SomeProperty1 = someProperty1;
                SomeProperty2 = someProperty2;
                SomeProperty3 = someProperty3;
            }

            public Type SomeProperty1 { get; set; }
            public double SomeProperty2 { get; set; }
            public string SomeProperty3 { get; set; }
        }

    The class that is decorated with the attribute looks like this:

        [ExportMyInterface(typeof(string), 0.1, "whoo data!")]
        [ExportMyInterface(typeof(int), 0.4, "asdfasdf!!")]
        public class DecoratedClass : IMyInterface { }

    The method that is trying to use the import looks like this:

        private void SomeFunction()
        {
            // CompositionContainer is an instance of CompositionContainer
            var myExports = CompositionContainer.GetExports<IMyInterface, IMyInterfaceInfo>();
        }

    In my case myExports is always empty. In my CompositionContainer, I have a Part in my catalog that has two ExportDefinitions, both with the following ContractName: "MyNamespace.IMyInterface". The metadata is also loaded correctly per my exports. If I remove the AllowMultiple setter and only include one exported attribute, the myExports variable now contains the single export with its loaded metadata. What am I doing wrong?

  • How to get the OCI lib on a Red Hat machine to work with ROracle?

    - by Matt Bannert
    I need to get the OCI lib working on my RHEL 6.3 machine, and I am having trouble with OCI header files that cannot be found. I have installed (using yum install) oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm because, according to the official page, it's all I need to run OCI. To test the whole thing in general I've installed sqlplus64, which worked after I set export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib. Unfortunately, the header files still couldn't be found after setting LD_LIBRARY_PATH. Actually, I am not surprised, because there is no include directory in any of these Oracle paths. So the question is: where do I get these missing header files from? Are they actually already there and I just can't find them? Btw: I am doing this whole exercise because I want to use ROracle on my RStudio server, and this R package depends on the OCI library. Once I am back in R territory the road gets much less bumpy for me. EDIT: this documentation helped me a little further. However, I think I have now found some header files in "/usr/include/oracle/11.2/client64". But which variable do I have to set to this location?

  • How to force inclusion of an object file in a static library when linking into an executable?

    - by Brian Bassett
    I have a C++ project that, due to its directory structure, is set up as a static library A, which is linked into shared library B, which is linked into executable C. (This is a cross-platform project using CMake, so on Windows we get A.lib, B.dll, and C.exe, and on Linux we get libA.a, libB.so, and C.) Library A has an init function (A_init, defined in A/initA.cpp) that is called from library B's init function (B_init, defined in B/initB.cpp), which is called from C's main. Thus, when linking B, A_init (and all symbols defined in initA.cpp) is linked into B, which is our desired behavior. The problem comes in that library A also defines a function (Af, defined in A/Afort.f) that is intended to be dynamically loaded (i.e. LoadLibrary/GetProcAddress on Windows and dlopen/dlsym on Linux). Since there are no references to Af from library B, the symbols from A/Afort.o are not included in B. On Windows, we can artificially create a reference by using the pragma:

        #pragma comment (linker, "/export:_Af")

    Since this is a pragma, it only works on Windows (using Visual Studio 2008). To get it working on Linux, we've tried adding the following to A/initA.cpp:

        extern void Af(void);
        static void (*Af_fp)(void) = &Af;

    This does not cause the symbol Af to be included in the final link of B. How can we force the symbol Af to be linked into B?

  • How do I rescue a small portion of data from a SQL Server database backup?

    - by Greg
    I have a live database that had some data deleted from it, and I need that data back. I have a very recent copy of that database that has already been restored on another machine. Unrelated changes have been made to the live database since the backup, so I do not want to wipe out the live database with a full restore. The data I need is small - just a dozen rows - but those dozen rows each have a couple of rows in other tables with foreign keys to them, and those rows have god knows how many rows with foreign keys pointing to them, so it would be complicated to restore by hand. Ideally I'd be able to tell the backup copy of the database to select the dozen rows I need, plus the transitive closure of everything they depend on and everything that depends on them, and export just that data, which I can then import into the live database without touching anything else. What's the best approach to take here? Thanks. Everyone has mentioned sp_generate_inserts. When using this, how do you prevent identity columns from messing everything up? Do you just turn IDENTITY_INSERT on?

  • Why does a Silverlight application show a blank browser screen when created from an exported template?

    - by Edward Tanguay
    I created a Silverlight app (without a website) named TestApp, with one TextBlock:

        <UserControl x:Class="TestApp.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480">
            <Grid x:Name="LayoutRoot">
                <TextBlock Text="this is a test"/>
            </Grid>
        </UserControl>

    I press F5 and see "this is a test" in my browser (Firefox). I select File | Export Template, name it TestAppTemplate, and save it. I create a new Silverlight app based on that template. Its MainPage.xaml has the exact same XAML as above. I press F5 and see a blank screen in my browser. I look at the HTML source of both of these and they are identical. Everything I have compared in both projects is identical. What do I have to do so that a Silverlight application created from my exported template does not show a blank screen? (Creating a WPF application from an exported template like this works fine.)

  • Fix DB duplicate entries (MySQL bug)

    - by Silence
    I'm using MySQL 4.1. Some tables have duplicate entries that go against the constraints. When I try to group rows, MySQL doesn't recognise the rows as being similar. Example: table A has a column "Name" with the Unique property. The table contains one row with the name 'Hach?' and one row with the same name but a square at the end instead of the '?' (which I can't reproduce in this text field). A "GROUP BY" on these 2 rows returns 2 separate rows. This causes several problems, including the fact that I can't export and reimport the database: on reimporting, an error mentions that an INSERT has failed because it violates a constraint. In theory I could try to import, wait for the first error, fix the import script and the original DB, and repeat; in practice, that would take forever. Is there a way to list all the anomalies, or to force the database to recheck constraints (and list all the values/rows that go against them)? I can supply the .MYD file if that helps.

  • Android - Adding external library to project

    - by mmontalbo
    Hi, I am having a lot of trouble adding the WEKA library to a project I am working on. I have followed several tutorials that explain how to do this, including the Android Developers guide (http://developer.android.com/guide/appendix/faq/commontasks.html#addexternallibrary) and several of the postings on SO. I have created a folder in my project with the weka.jar file, created a new library (adding the weka.jar file to the library) and included this library in my build path. I have also added the library under the "Order and Export" tab in the project properties. I have also tried importing the jar file so that the actual contents of the jar are extracted into a directory in my project. The end result of all of this is that my project builds correctly and without error, but when it comes time to run my code on the emulator I get the following exception:

        04-10 22:52:21.051: ERROR/dalvikvm(582): Could not find class 'weka.classifiers.trees.J48', referenced from method edu.usc.student.composure.classifier.GaitClassifierImpl.

    with J48 being the class I reference in my code. Does anyone have any additional suggestions that I may have overlooked? Thanks!

  • Open an Emacs buffer when a command tries to open an editor in shell-mode

    - by Chris Conway
    I like to use Emacs' shell mode, but it has a few deficiencies. One of those is that it's not smart enough to open a new buffer when a shell command tries to invoke an editor. For example, with the environment variable VISUAL set to vim I get the following from svn propedit:

        $ svn propedit svn:externals .
        "svn-prop.tmp" 2L, 149C[1;1H ~ [4;1H~ [5;1H~ [6;1H~ [7;1H~ ...

    (It may be hard to tell from the representation, but it's a horrible, ugly mess.) With VISUAL set to "emacs -nw", I get:

        $ svn propedit svn:externals .
        emacs: Terminal type "dumb" is not powerful enough to run Emacs.
        It lacks the ability to position the cursor.
        If that is not the actual type of terminal you have, use the Bourne shell command
        `TERM=... export TERM' (C-shell: `setenv TERM ...') to specify the correct type.
        It may be necessary to do `unset TERMINFO' (C-shell: `unsetenv TERMINFO') as well.
        svn: system('emacs -nw svn-prop.tmp') returned 256

    (It works with VISUAL set to just emacs, but only from inside an Emacs X window, not inside a terminal session.) Is there a way to get shell mode to do the right thing here and open up a new buffer on behalf of the command line process?
