Search Results

Search found 6242 results on 250 pages for 'named pipe'.

  • How to get associated URLRequest from Event.COMPLETE fired by URLLoader

    - by matt lohkamp
    So let's say we want to load some XML - var xmlURL:String = 'content.xml'; var xmlURLRequest:URLRequest = new URLRequest(xmlURL); var xmlURLLoader:URLLoader = new URLLoader(xmlURLRequest); xmlURLLoader.addEventListener(Event.COMPLETE, function(e:Event):void{ trace('loaded',xmlURL); trace(XML(e.target.data)); }); If we need to know the source URL for that particular XML doc, we've got that variable to tell us, right? Now let's imagine that the xmlURL variable isn't around to help us - maybe we want to load 3 XML docs, named in sequence, and we want to use throwaway variables inside of a for-loop: for(var i:uint = 3; i > 0; i--){ var xmlURLLoader:URLLoader = new URLLoader(new URLRequest('content'+i+'.xml')); xmlURLLoader.addEventListener(Event.COMPLETE, function(e:Event):void{ trace(e.target.src); // I wish this worked... trace(XML(e.target.data)); }); } Suddenly it's not so easy, right? I hate that you can't just say e.target.src or whatever - is there a good way to associate URLLoaders with the URL they loaded data from? Am I missing something? It feels unintuitive to me.
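
    A minimal sketch of one common workaround, assuming Flash Player 9+/ActionScript 3: URLLoader does not expose the URL it loaded, so wrap the loader creation in a small helper function and let each listener close over its own url parameter (a Dictionary keyed by loader works too). The function and variable names here are illustrative, not from the original post.

        import flash.events.Event;
        import flash.net.URLLoader;
        import flash.net.URLRequest;

        function loadDoc(url:String):void {
            var loader:URLLoader = new URLLoader();
            loader.addEventListener(Event.COMPLETE, function(e:Event):void {
                trace('loaded', url);          // each call closes over its own url
                trace(XML(loader.data));
            });
            loader.load(new URLRequest(url));
        }

        for (var i:uint = 1; i <= 3; i++) {
            loadDoc('content' + i + '.xml');
        }

    Declaring the listener directly inside the for-loop body does not help, because AS3 variables are function-scoped, so every closure would end up sharing the same loop variable.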

  • PHP Form Validation

    - by JM4
    This question will undoubtedly be difficult to answer and ask in a way that makes sense, but I'll try my best: I have a form which uses PHP to display certain sections of the form, such as: <?php if ($_SESSION['EnrType'] == "Individual") { display only form information for individual enrollment } ?> and <?php if ($_SESSION['Num_Enrs'] > 6) { display only form information for 7 total members enrollment } ?> In each form piece, unique information is collected about each enrollee, but the basic criteria for each enrollee are the same, i.e. all enrollees must have a value in the FirstName field. Each field is named according to the enrollee number, i.e. Num1FirstName; Num2FirstName. I have a PHP validation script which is absolutely fantastic and am not looking to change it, but the issue I am running into is duplication of the script in order to validate ALL fields in one swoop. On submission, all POSTed items are run through my validation script and, based on the rules set, return an error if they do not equal true. Sample code: if (isset($_POST['submit'])) { // import the validation library require("validation.php"); $rules = array(); // stores the validation rules //All Enrollee Rules $rules[] = "required,Num1FirstName,Num2FirstName,The First Name field is required."; The script above does the following: $rules[] = "REQUIREMENT,fieldname,error message" where requirement gives the criteria (in this case, simply that a value is passed), fieldname is the name of the field being validated, and error message is the error returned. My goal is to use the same formula above and have $rules[] run through ALL first names and return the error posted ONLY if they exist (i.e. don't check for member #7's first name if it doesn't exist on the screen). If I simply put a comma between the 'fieldnames' this only checks for the first, then second, and so on, so this won't work. Any ideas?
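
    A hedged sketch of one way to build the rules dynamically, assuming the "required,field,message" rule format described above and the $_SESSION['Num_Enrs'] count from the question; the loop only adds a rule for fields that were actually posted, so absent enrollees are skipped.

        <?php
        $rules = array();
        $numEnrollees = (int) $_SESSION['Num_Enrs'];
        for ($i = 1; $i <= $numEnrollees; $i++) {
            $field = "Num{$i}FirstName";
            if (isset($_POST[$field])) {   // only validate fields rendered on this form
                $rules[] = "required,{$field},The First Name field is required for enrollee {$i}.";
            }
        }
        ?>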

  • Loading a helper elsewhere than the autoload.php?

    - by drpcken
    I inherited a project and I'm cleaning it up a bit and trying to finish it. I noticed that they used (or wrote) a breadcrumb helper. It is in my helpers folder and is named breadcrumb_helper.php. It has a single function to build a breadcrumb menu with links and pass it to the view breadcrumbs.php. Here's the code: function show_breadcrumbs() { $ci =& get_instance(); $ci->load->helper('inflector'); $data = ''; //build breadcrumb and store in $data $this->load->view("breadcrumbs", $data); } I was trying to figure out how this helper worked and I checked the autoload.php, but there is no reference to the helper in there. In fact, here is my autoload: $autoload['helper'] = array('url','asset','combine','navigation','form','portfolio','cookie','default'); This show_breadcrumbs() function is used quite a bit in some of my pages, so I'm confused as to how it's loading if it isn't in the autoloader. It is called like this in a few of my pages: <?=show_breadcrumbs()?> What am I missing? Why isn't this in my autoload? I even did a global search and couldn't find anywhere the helper is being loaded.
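
    Autoloading is not the only place a helper can come from; any controller, library, or view that runs before the call can load it. A minimal sketch, assuming a hypothetical base controller (it could just as well be a specific controller or the view itself):

        <?php
        // application/core/MY_Controller.php (hypothetical)
        class MY_Controller extends CI_Controller {
            public function __construct() {
                parent::__construct();
                // loads application/helpers/breadcrumb_helper.php
                $this->load->helper('breadcrumb');
            }
        }

        // a view can also do this inline, before calling show_breadcrumbs():
        // $this->load->helper('breadcrumb');
        ?>

    Depending on the CodeIgniter version, config files such as autoload.php can also be overridden per environment (application/config/<environment>/autoload.php), so the helper may be autoloaded from a copy of the file other than the one inspected.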

  • What makes deployment successful for some users and unsuccessful for others?

    - by Julien
    I am trying to deploy a Visual C++ application (developed with Microsoft Visual Studio 2008) using a Setup and Deployment Project. After installation, users on some target computers get the following error message after launching the application executable: “This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem.” Another user could run the application properly after installation. I cannot find the root cause of this problem, despite spending several hours on the Visual Studio help files and online forums (most postings date back to 2006). Does anyone at Stack Overflow have a suggestion? Thanks in advance. Additional details appear below. The application uses FLTK 1.1.9 for a GUI library, as well as some Boost 1.39 libraries (regex, lexical_cast, date_time, math). I made sure I am trying to deploy the release version (not the debug version) of the application. The Runtime Library in the Code Generation settings is Multi-threaded DLL (/MD). The Dependency Walker report for myapp.exe lists the following DLLs: wsock32.dll, comctl32.dll, kernel32.dll, user32.dll, gdi32.dll, shell32.dll, ole32.dll, msvcp90.dll, msvcr90.dll. In the Setup and Deployment Project, I add the following DLLs to the File System on Target Machine: fltkdlld.dll, and a folder named Microsoft.VC90.CRT with msvcm90.dll, msvcp90.dll, msvcr90.dll and Microsoft.VC90.CRT.manifest. On the target computers that get the error message, the installation process requires the .NET Framework 3.5 to be installed first. Any suggestion? Thanks in advance!

  • Convert Excel Range to ADO.NET DataSet or DataTable, etc.

    - by htbrady
    I have an Excel spreadsheet that will sit out on a network share drive. It needs to be accessed by my Winforms C# 3.0 application (many users could be using the app and hitting this spreadsheet at the same time). There is a lot of data on one worksheet. This data is broken out into areas that I have named as ranges. I need to be able to access these ranges individually, return each range as a dataset, and then bind it to a grid. I have found examples that use OLE and have got these to work. However, I have seen some warnings about using this method, plus at work we have been using Microsoft.Office.Interop.Excel as the standard thus far. I don't really want to stray from this unless I have to. Our users will be using Office 2003 on up as far as I know. I can get the range I need with the following code: MyDataRange = (Microsoft.Office.Interop.Excel.Range) MyWorkSheet.get_Range("MyExcelRange", Type.Missing); The OLE way was nice as it would take my first row and turn those into columns. My ranges (12 total) are for the most part different from each other in number of columns. Didn't know if this info would affect any recommendations. Is there any way to use Interop and get the returned range back into a dataset?
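
    A hedged sketch of an Interop-only conversion, assuming a rectangular named range whose first row holds the column headers (mirroring the OLE behaviour mentioned above); the resulting DataTable can then be added to a DataSet and bound to the grid.

        using System;
        using System.Data;
        using Excel = Microsoft.Office.Interop.Excel;

        static DataTable RangeToDataTable(Excel.Range range)
        {
            // Value2 on a multi-cell range comes back as a 1-based object[,]
            object[,] values = (object[,])range.Value2;
            int rows = values.GetLength(0);
            int cols = values.GetLength(1);

            DataTable table = new DataTable();
            for (int c = 1; c <= cols; c++)
            {
                object header = values[1, c];
                table.Columns.Add(header == null ? "Column" + c : header.ToString());
            }

            for (int r = 2; r <= rows; r++)
            {
                DataRow row = table.NewRow();
                for (int c = 1; c <= cols; c++)
                    row[c - 1] = values[r, c] ?? (object)DBNull.Value;
                table.Rows.Add(row);
            }
            return table;
        }

        // usage: dataSet.Tables.Add(RangeToDataTable(MyDataRange));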

  • Assemblies mysteriously loaded into new AppDomains

    - by Eric
    I'm testing some code that does work whenever assemblies are loaded into an appdomain. For unit testing (in VS2k8's built-in test host) I spin up a new, uniquely-named appdomain prior to each test with the idea that it should be "clean": [TestInitialize()] public void CalledBeforeEachTestMethod() { AppDomainSetup appSetup = new AppDomainSetup(); appSetup.ApplicationBase = @"G:\<ProjectDir>\bin\Debug"; Evidence baseEvidence = AppDomain.CurrentDomain.Evidence; Evidence evidence = new Evidence( baseEvidence ); _testAppDomain = AppDomain.CreateDomain( "myAppDomain" + _appDomainCounter++, evidence, appSetup ); } [TestMethod] public void MissingFactoryCausesAppDomainUnload() { SupportingClass supportClassObj = (SupportingClass)_testAppDomain.CreateInstanceAndUnwrap( GetType().Assembly.GetName().Name, typeof( SupportingClass ).FullName ); try { supportClassObj.LoadMissingRegistrationAssembly(); Assert.Fail( "Should have nuked the app domain" ); } catch( AppDomainUnloadedException ) { } } [TestMethod] public void InvalidFactoryMethodCausesAppDomainUnload() { SupportingClass supportClassObj = (SupportingClass)_testAppDomain.CreateInstanceAndUnwrap( GetType().Assembly.GetName().Name, typeof( SupportingClass ).FullName ); try { supportClassObj.LoadInvalidFactoriesAssembly(); Assert.Fail( "Should have nuked the app domain" ); } catch( AppDomainUnloadedException ) { } } public class SupportingClass : MarshalByRefObject { public void LoadMissingRegistrationAssembly() { MissingRegistration.Main(); } public void LoadInvalidFactoriesAssembly() { InvalidFactories.Main(); } } If every test is run individually I find that it works correctly; the appdomain is created and has only the few intended assemblies loaded. However, if multiple tests are run in succession then each _testAppDomain already has assemblies loaded from all previous tests. Oddly enough, the two tests get appdomains with different names. The test assemblies that define MissingRegistration and InvalidFactories (two different assemblies) are never loaded into the unit test's default appdomain. Can anyone explain this behavior?

  • Dynamic loading of shared objects using dlopen()

    - by Andy
    Hi, I'm working on a plain X11 app. By default, my app only requires libX11.so and the standard gcc C and math libs. My app has also support for extensions like Xfixes and Xrender and the ALSA sound system. But this feature shall be made optional, i.e. if Xfixes/Xrender/ALSA is installed on the host system, my app will offer extended functionality. If Xfixes or Xrender or ALSA is not there, my app will still run but some functionality will not be available. To achieve this behaviour, I'm not linking dynamically against -lXfixes, -lXrender and -lasound. Instead, I'm opening these libraries manually using dlopen(). By doing it this way, I can be sure that my app won't fail in case one of these optional components is not present. Now to my question: What library names should I use when calling dlopen()? I've seen that these differ from distro to distro. For example, on openSUSE 11, they're named the following: libXfixes.so libXrender.so libasound.so On Ubuntu, however, the names have a version number attached, like this: libXfixes.so.3 libXrender.so.1 libasound.so.2 So trying to open "libXfixes.so" would fail on Ubuntu, although the lib is obviously there. It just has a version number attached. So how should my app handle this? Should I let my app scan /usr/lib/ first manually to see which libs we have and then choose an appropriate one? Or does anyone have a better idea? Thanks guys, Andy
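
    On most distributions the unversioned libXfixes.so-style names are only installed with the development packages, while the ABI-versioned sonames (libXfixes.so.3 and friends) are what end-user systems actually ship, so the usual practice is to dlopen() the soname and keep the unversioned name only as a fallback. A minimal sketch (the version numbers are taken from the question and should match the ABI the code was written against):

        /* link with -ldl */
        #include <dlfcn.h>
        #include <stddef.h>

        static void *open_first(const char *const names[])
        {
            for (size_t i = 0; names[i] != NULL; i++) {
                void *h = dlopen(names[i], RTLD_NOW | RTLD_GLOBAL);
                if (h)
                    return h;
            }
            return NULL;   /* feature stays disabled */
        }

        static const char *const xfixes_names[] = { "libXfixes.so.3", "libXfixes.so", NULL };
        static const char *const render_names[] = { "libXrender.so.1", "libXrender.so", NULL };
        static const char *const alsa_names[]   = { "libasound.so.2",  "libasound.so",  NULL };

        /* e.g.: void *xfixes = open_first(xfixes_names); */

    This avoids scanning /usr/lib by hand and degrades gracefully when a library is absent.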

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and dbml in Visual Studio 2010 using its wizards. Everything was working fine until I checked the tables' data (also in Visual Studio Server Explorer) and none of my updates were there. using (var context = new CenasDataContext()) { context.Log = Console.Out; context.Cenas.InsertOnSubmit(new Cena() { id = 1}); context.SubmitChanges(); } This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID. INSERT INTO [dbo].Cenas VALUES (@p0) -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1 This is the LOG from the execution (printed the context log into the console). The problem I'm having is that these updates are not persisted in the database. I mean that when I query my database (Visual Studio Server Explorer - new query) I see the table is empty, every time. I am using a SQL Server database file (.mdf).

  • WPF: How to bind and update display with DataContext

    - by Am
    I'm trying to do the following: I have a TabControl with several tabs. Each TabControlItem.Content points to a BookDetails instance, which is a UserControl. Each BookDetails has a dependency property called IsEditMode. I want a control outside of the TabControl, named ToggleEditButton, to be updated whenever the selected tab changes. I thought I could do this by changing the ToggleEditButton data context, but it doesn't seem to work (I'm new to WPF, though, so I might be way off). The code changing the data context: private void tabControl1_SelectionChanged(object sender, SelectionChangedEventArgs e) { if (e.Source is TabControl) { if (e.Source.Equals(tabControl1)) { if (tabControl1.SelectedItem is CloseableTabItem) { var tabItem = tabControl1.SelectedItem as CloseableTabItem; RibbonBook.DataContext = tabItem.Content as BookDetails; ribbonBar.SelectedTabItem = RibbonBook; } } } } The DependencyProperty under BookDetails: public static readonly DependencyProperty IsEditModeProperty = DependencyProperty.Register("IsEditMode", typeof (bool), typeof (BookDetails), new PropertyMetadata(true)); public bool IsEditMode { get { return (bool)GetValue(IsEditModeProperty); } set { SetValue(IsEditModeProperty, value); SetValue(IsViewModeProperty, !value); } } And the relevant XAML: <odc:RibbonTabItem Title="Book" Name="RibbonBook"> <odc:RibbonGroup Title="Details" Image="img/books2.png" IsDialogLauncherVisible="False"> <odc:RibbonToggleButton Content="Edit" Name="ToggleEditButton" odc:RibbonBar.MinSize="Medium" SmallImage="img/edit_16x16.png" LargeImage="img/edit_32x32.png" Click="Book_EditDetails" IsChecked="{Binding Path=IsEditMode, Mode=TwoWay}"/> ... There are two things I want to accomplish: having the button reflect the IsEditMode of the visible tab, and having the button change the property value, with no code-behind (if possible). Any help would be greatly appreciated.
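
    A hedged sketch of a XAML-only alternative (not the poster's exact markup): give the button the selected tab's content as its DataContext directly in the binding, so IsChecked tracks whichever BookDetails is currently visible without the SelectionChanged handler. This assumes tabControl1 and the ribbon button live in the same name scope.

        <odc:RibbonToggleButton Content="Edit" Name="ToggleEditButton"
            DataContext="{Binding ElementName=tabControl1, Path=SelectedItem.Content}"
            IsChecked="{Binding Path=IsEditMode, Mode=TwoWay}" />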

  • OleDB connection only working when debugging

    - by Francesc
    I have a C# application that connects to a named SQL Express instance on the local machine using an OleDbConnection: _connection = new OleDbConnection(_strConn); _connection.Open(); _strConn is something like this: "Provider=sqloledb;Data Source=.\NAMEDINSTANCE;Initial Catalog=dbname;User Id=sa;Password=password;" If I debug the application, the connection works fine. If I run the application from Windows Explorer (the same debug compilation), I get an "OleDbException: Login timeout expired" on the Open() line after 30 seconds. The strange thing is that the exception happens even if I attach the debugger to the exe. I can see that the connection string is correct and everything seems fine. I can't find any extra information in the SQL Express error log or SQL Activity Monitor either. If it helps, here is the exception: System.Data.OleDb.OleDbException: Login timeout expired at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection) at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.OleDb.OleDbConnection.Open() I imagine that finding the issue with the information I give here might be difficult, but I don't know where else to look or what other tests to do, so any ideas on what it could be or what tests I could do to find out will be really appreciated.

  • SQL query recursion for a web-like structure

    - by MickeyD
    I have a table here, named "Foo". The data is set up something like this.

        ID   TableReference   DataId0   DataId1   DataId2
        --   --------------   -------   -------   -------
        1    Prize            3         4         5
        2    Prize            4         5         NULL
        3    Cash             1         NULL      NULL
        4    Prize            8         NULL      12
        5    Foo              2         3         NULL
        6    Cash             8         1         10
        7    Foo              5         1         2

    Etc. The data is horribly set up, I know, but I didn't set it up that way. :) I'm only dealing with the after effect. I'm trying to come up with a way to essentially "flatten" the table; that is, to display all the data to a point where the table "Foo" does not reference itself. I'm trying to figure out a sql query that I can do to get there. Usually when I deal with recursion, I have (or can establish) parent IDs and set it up that way, but for this table there are seemingly multiple child and parent IDs creating a web-like structure instead of a hierarchy. So I'm at a loss where to even begin to write a sql query for something like this. Note: There is no infinite looping (where one Foo points to another Foo, which points back to the original Foo) from what I've found. Using t-sql. Thanks for any assistance, if at all possible.
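
    A hedged sketch of one way to flatten this with a recursive CTE, assuming SQL Server 2008 or later (for the VALUES row constructor): first unpivot the three DataId columns into one reference per row, then follow references that point back into Foo until only non-Foo references remain. With web-like data the same combination can be reached more than once, hence the DISTINCT; the default MAXRECURSION limit of 100 also applies.

        ;WITH Refs AS (
            -- one row per non-NULL reference
            SELECT f.ID, f.TableReference, v.DataId
            FROM Foo AS f
            CROSS APPLY (VALUES (f.DataId0), (f.DataId1), (f.DataId2)) AS v(DataId)
            WHERE v.DataId IS NOT NULL
        ),
        Walk AS (
            -- anchor: every row's direct references
            SELECT r.ID AS RootID, r.TableReference, r.DataId
            FROM Refs AS r

            UNION ALL

            -- recursive step: replace a reference into Foo with that row's own references
            SELECT w.RootID, r.TableReference, r.DataId
            FROM Walk AS w
            JOIN Refs AS r ON r.ID = w.DataId
            WHERE w.TableReference = 'Foo'
        )
        SELECT DISTINCT RootID, TableReference, DataId
        FROM Walk
        WHERE TableReference <> 'Foo'
        ORDER BY RootID;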

  • Using Sphinx with a distutils-built C extension

    - by detly
    I have written a Python module including a submodule written in C: the module itself is called foo and the C part is foo._bar. The structure looks like:

        src/
            foo/__init__.py   <- contains the public stuff
            foo/_bar/bar.c    <- the C extension
        doc/                  <- Sphinx configuration
            conf.py
            ...

    foo/__init__.py imports _bar to augment it, and the useful stuff is exposed in the foo module. This works fine when it's built, but obviously won't work in uncompiled form, since _bar doesn't exist until it's built. I'd like to use Sphinx to document the project, and use the autodoc extension on the foo module. This means I need to build the project before I can build the documentation. Since I build with distutils, the built module ends up in some variably named dir build/lib.linux-ARCH-PYVERSION — which means I can't just hard-code the directory into a Sphinx' conf.py. So how do I configure my distutils setup.py script to run the Sphinx builder over the built module? For completeness, here's the call to setup (the 'fake' things are custom builders that subclass build and build_ext):

        setup(cmdclass = { 'fake': fake, 'build_ext_fake' : build_ext_fake },
              package_dir = {'': 'src'},
              packages = ['foo'],
              name = 'foo',
              version = '0.1',
              description = desc,
              ext_modules = [module_real])
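
    A hedged sketch of a custom distutils command that builds the extension first, asks the build command where the compiled package landed, and then runs sphinx-build with that directory on PYTHONPATH so autodoc can import foo (and foo._bar). The command name and the doc paths are assumptions.

        import os
        import subprocess
        from distutils.cmd import Command

        class build_docs(Command):
            description = "build the C extension, then the Sphinx documentation"
            user_options = []

            def initialize_options(self):
                pass

            def finalize_options(self):
                pass

            def run(self):
                self.run_command('build')                      # compiles foo._bar
                build = self.get_finalized_command('build')
                build_lib = os.path.abspath(build.build_lib)   # e.g. build/lib.linux-x86_64-2.6

                env = dict(os.environ)
                env['PYTHONPATH'] = build_lib + os.pathsep + env.get('PYTHONPATH', '')
                subprocess.check_call(
                    ['sphinx-build', '-b', 'html', 'doc', os.path.join('doc', '_build', 'html')],
                    env=env)

        # register it next to the existing custom commands:
        #   setup(cmdclass={'fake': fake, 'build_ext_fake': build_ext_fake, 'build_docs': build_docs}, ...)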

  • How to obtain the panel within a treeview (WPF)

    - by sperling
    How can one obtain the panel that is used within a TreeView? I've read that by default TreeView uses a VirtualizingStackPanel for this. When I look at a TreeView template, all I see is <ItemsPresenter />, which seems to hide the details of what panel is used. Possible solutions:
    1) On the TreeView instance ("tv"), from code, do this: tv.ItemsPanel. The problem is, this does not return a panel, but an ItemsPanelTemplate ("gets or sets the template that defines the panel that controls the layout of the items").
    2) Make a TreeView template that explicitly replaces <ItemsPresenter /> with your own ItemsControl.ItemsPanel. I am providing a special template anyway, so this is fine in my scenario. Then give a part name to the panel that you place within that template, and from code you can obtain that part (i.e. the panel). The problem with this? See below. (I am using a control named VirtualTreeView, which is derived from TreeView, as is seen below.) I use the following: [sorry folks about poor formatting here, this is my first post, I tried 4 spaces for code... doesn't seem to work?] [I stripped out all clutter here for visibility...] The problem with this is: it immediately overrides any TreeView layout mechanism. Actually, you just get a blank screen, even when you have TreeViewItems filling the tree. Well, the reason I want to get hold of the panel is to take some part in MeasureOverride, but without going into all of that, I certainly do not want to rewrite the book on how to lay out a TreeView. I.e., doing this the step #2 way seems to invalidate the point of even using a TreeView in the first place. Sorry if there is some confusion here; thanks for any help you can offer.
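
    A hedged sketch of a third option that leaves the template alone: once the template has been applied (for example after Loaded), walk the visual tree and return the panel whose IsItemsHost flag is set; that is the panel generated from the ItemsPanelTemplate, whatever its concrete type is.

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        static Panel FindItemsHost(DependencyObject root)
        {
            for (int i = 0; i < VisualTreeHelper.GetChildrenCount(root); i++)
            {
                DependencyObject child = VisualTreeHelper.GetChild(root, i);
                Panel panel = child as Panel;
                if (panel != null && panel.IsItemsHost)
                    return panel;
                Panel found = FindItemsHost(child);
                if (found != null)
                    return found;
            }
            return null;
        }

        // usage, e.g. in the tree's Loaded handler: Panel itemsHost = FindItemsHost(tv);

    Note this only finds the top-level items host; each expanded TreeViewItem has its own items host for its children.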

  • Sql server query using function and view is slower

    - by Lieven Cardoen
    I have a table with a xml column named Data: CREATE TABLE [dbo].[Users]( [UserId] [int] IDENTITY(1,1) NOT NULL, [FirstName] [nvarchar](max) NOT NULL, [LastName] [nvarchar](max) NOT NULL, [Email] [nvarchar](250) NOT NULL, [Password] [nvarchar](max) NULL, [UserName] [nvarchar](250) NOT NULL, [LanguageId] [int] NOT NULL, [Data] [xml] NULL, [IsDeleted] [bit] NOT NULL,... In the Data column there's this xml <data> <RRN>...</RRN> <DateOfBirth>...</DateOfBirth> <Gender>...</Gender> </data> Now, executing this query: SELECT UserId FROM Users WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN after clearing the cache takes (if I execute it a couple of times after each other) 910, 739, 630, 635, ... ms. Now, a db specialist told me that adding a function, a view and changing the query would make it much more faster to search a user with a given RRN. But, instead, these are the results when I execute with the changes from the db specialist: 2584, 2342, 2322, 2383, ... This is the added function: CREATE FUNCTION dbo.fn_Users_RRN(@data xml) RETURNS varchar(100) WITH SCHEMABINDING AS BEGIN RETURN @data.value('(/data/RRN)[1]', 'varchar(max)'); END; The added view: CREATE VIEW vwi_Users WITH SCHEMABINDING AS SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN from dbo.Users Indexes: CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId) CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN) And then the changed query: SELECT UserId FROM Users WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11' Why is the solution with a function and a view going slower?
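
    One thing worth checking, offered as a hedged suggestion rather than a definitive diagnosis: the rewritten query still calls the scalar UDF against the base table, and SQL Server only matches an indexed view automatically on Enterprise/Developer editions, so on other editions the index on RRN is never used and the per-row function call just adds overhead. Querying the view directly with NOEXPAND forces the view index to be considered:

        SELECT UserId
        FROM vwi_Users WITH (NOEXPAND)
        WHERE RRN = '59021626919-61861855-S_FA1E11';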

  • Linux network stack : adding protocols with an LKM and dev_add_pack

    - by agent0range
    Hello, I have recently been trying to familiarize myself with the Linux networking stack and device drivers (I have both similarly named O'Reilly books) with the eventual goal of offloading UDP. I have already implemented UDP on the NIC, but now the hard part... Rather than ask for assistance on this larger goal, I was hoping someone could clarify for me a particular snippet I found that is part of an LKM which registers a new protocol (OTP) that acts as a filter between the device driver and the network stack. http://www.phrack.org/archives/55/p55_0x0c_Building%20Into%20The%20Linux%20Network%20Layer_by_lifeline%20&%20kossak.txt (Note: this Phrack article contains three different modules; the code for the OTP is at the bottom of the page.) In the init function of his example he has: otp_proto.type = htons(ETH_P_ALL); otp_proto.func = otp_func; dev_add_pack(&otp_proto); which (if I understand correctly) should register otp_proto as a packet sniffer and put it into the ptype_all data structure. My question is about dev_add_pack. Is it the case that the protocol being registered as a filter will always be placed at this layer, between L2 and the device driver? Or, for instance, could I make such filtering occur between the application and transport layers (analyze socket parameters) using the same process? I apologize if this is confusing - I am having some trouble wrapping my head around the bigger picture when it comes to modules altering kernel stack functionality. Thanks

  • Castle, sharing a transient component between a decorator and a decorated component

    - by Marius
    Consider the following example: public interface ITask { void Execute(); } public class LoggingTaskRunner : ITask { private readonly ITask _taskToDecorate; private readonly MessageBuffer _messageBuffer; public LoggingTaskRunner(ITask taskToDecorate, MessageBuffer messageBuffer) { _taskToDecorate = taskToDecorate; _messageBuffer = messageBuffer; } public void Execute() { _taskToDecorate.Execute(); Log(_messageBuffer); } private void Log(MessageBuffer messageBuffer) {} } public class TaskRunner : ITask { public TaskRunner(MessageBuffer messageBuffer) { } public void Execute() { } } public class MessageBuffer { } public class Configuration { public void Configure() { IWindsorContainer container = null; container.Register( Component.For<MessageBuffer>() .LifeStyle.Transient); container.Register( Component.For<ITask>() .ImplementedBy<LoggingTaskRunner>() .ServiceOverrides(ServiceOverride.ForKey("taskToDecorate").Eq("task.to.decorate"))); container.Register( Component.For<ITask>() .ImplementedBy<TaskRunner>() .Named("task.to.decorate")); } } How can I make Windsor instantiate the "shared" transient component so that both "Decorator" and "Decorated" gets the same instance? Edit: since the design is being critiqued I am posting something closer to what is being done in the app. Maybe someone can suggest a better solution (if sharing the transient resource between a logger and the true task is considered a bad design)

  • How to Parse an XML (using XELement) having multiple Namespace ?

    - by Subhen
    Hi, I get the following XResponse after parsing the XML document: <DIDL-Lite xmlns="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:upnp="urn:schemas-upnp-org:metadata-1-0/upnp/" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0/"> <item id="1182" parentID="40" restricted="1"> <title>Hot Issue</title> </item> As per the earlier thread, when there is a default namespace in the document, you must parse it as if it were a named namespace. For example: XNamespace ns = "urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/"; var xDIDL = xResponse.Element(ns + "DIDL-Lite"); But in my case I have four different namespaces. I am not getting any results after using the following query; I get the response, but it does not yield any results: XNamespace dc = "http://purl.org/dc/elements/1.1/"; var vAudioData = from xAudioinfo in xResponse.Descendants(ns + "DIDL-lite").Elements("item") select new RMSMedia { strAudioTitle = ((string)xAudioinfo.Element(dc + "title")).Trim(), }; I have no clue what's going on, as I am new to this. Please help.
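
    A hedged sketch of the query with every name fully qualified (assuming the usual System.Linq / System.Xml.Linq usings): XName comparisons are case-sensitive, so "DIDL-lite" never matches "DIDL-Lite", and Elements("item") without a namespace matches nothing because item inherits the default namespace. Going straight to the item descendants sidesteps the root element; whether title lives in the dc namespace or the default one depends on the actual response, so both are tried here.

        XNamespace ns = "urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/";
        XNamespace dc = "http://purl.org/dc/elements/1.1/";

        var titles =
            from item in xResponse.Descendants(ns + "item")
            select new
            {
                Title = ((string)item.Element(dc + "title")
                         ?? (string)item.Element(ns + "title")
                         ?? string.Empty).Trim()
            };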

  • unable to set fields of collection-property elements after changing their order (elements becoming 'deleted')

    - by Jaroslav Záruba
    Hello I want to change order of objects in a collection, and then to access+modify fields of those items. Unfortunately the items somehow become 'deleted'. This is what I do... if(someCondition) { MainEvent mainEvent = pm.getObjectById(MainEvent.class, mainEventKey); /* * events in the original order * MainEvent.subEvents field is not in default fetch group, * therefore I also tried to add the named group into the * persistenceManeger fetch plan, no difference * (mainEvent is not instance of the Event sub/class BTW) */ List<Event> subEvents = mainEvent.getSubEvents(); // re-arrange the events according to keysOrdered { Map<Key, Event> eventMap = new HashMap<Key, Event>(); for(Event event : subEvents) eventMap.put(event.getKey(), event); List<Event> eventsOrdered = new LinkedList<Event>(); for(Key eventKey : keysOrdered) eventsOrdered.add(eventMap.put(eventKey, eventMap.get(eventKey))); // } // put the re-arranged items back into the collection property { subEvents.clear(); subEvents.addAll(eventsOrdered); // } pm.makePersistent(mainEvent); eventsOrdered = subEvents; } else eventsOrdered = getEventsUsingAlternateApproach(); /* * so by now the mainEvent variable does not exist; * could it be this lead the persistence manager to mark * my events as abandoned/obsolete/invalid/deleted...? */ for(Event event : eventsOrdered) event.setDate(new Date()); // -> "Cannot write fields to a deleted object" What am I doing wrong please?

  • Weird camera Intent behavior

    - by David Erosa
    Hi all. I'm invoking the MediaStore.ACTION_IMAGE_CAPTURE intent with the MediaStore.EXTRA_OUTPUT extra so that it does save the image to that file. In onActivityResult I can check that the image is being saved in the intended file, which is correct. The weird thing is that the image is nonetheless also saved in a file named something like "/sdcard/Pictures/Camera/1298041488657.jpg" (the epoch time at which the image was taken). I've checked the Camera app source (froyo-release branch) and I'm almost sure that the code path is correct and shouldn't save the image, but I'm a noob and I'm not completely sure. AFAIK, the image saving process starts with this callback (comments are mine): private final class JpegPictureCallback implements PictureCallback { ... public void onPictureTaken(...){ ... // This is where the image is passed back to the invoking activity. mImageCapture.storeImage(jpegData, camera, mLocation); ... public void storeImage(final byte[] data, android.hardware.Camera camera, Location loc) { if (!mIsImageCaptureIntent) { // Am I an intent? int degree = storeImage(data, loc); // THIS SHOULD NOT BE CALLED WITHIN THE CAPTURE INTENT!! ....... // And finally: private int storeImage(byte[] data, Location loc) { try { long dateTaken = System.currentTimeMillis(); String title = createName(dateTaken); String filename = title + ".jpg"; // Eureka, timestamp filename! ... So, I'm receiving the correct data, but it's also being saved by the "storeImage(data, loc);" call, which should not be reached... It wouldn't be a problem if I could get the newly created filename from the intent result data, but I can't. When I found this out, I found about 20 image files from my tests that I didn't know were on my sdcard :) I'm getting this behavior both with my Nexus One on Froyo and my Huawei U8110 on Eclair. Could anyone please enlighten me? Thanks a lot.

  • Reducing a set of non-unique elements via transformations

    - by Andrey Fedorov
    I have:
    1) a "starting set" of not-necessarily-unique elements, e.g. { x, y, z, z },
    2) a set of transformations, e.g. (x,z) => y, (z,z) => z, x => z, y => x, and
    3) a "target set" that I am trying to get by applying transformations to the starting set, e.g. { z }.
    I'd like to find an algorithm to generate the (possibly infinite) possible applications of the transformations to the starting set that result in the target set. For example:

        { x, y, z, z }, y => x
        { x, x, z, z }, x => z
        { z, x, z, z }, x => z
        { z, z, z, z }, (z, z) => z
        { z, z, z }, (z, z) => z
        { z, z }, (z, z) => z
        { z }

    This sounds like something that's probably an existing (named) problem, but I don't recognize it. Can anyone help me track it down, or suggest further reading on something similar?
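
    This looks like reachability in a rewriting system over multisets (multiset / Petri-net style rewriting), and in general it has to be treated as a search problem. A hedged sketch of a depth-bounded breadth-first enumeration using the rules from the question; it can blow up combinatorially, which is why the depth bound is there.

        from collections import Counter, deque

        def rewrite_paths(start, target, rules, max_depth=10):
            """Yield sequences of rule applications turning start into target."""
            start, target = Counter(start), Counter(target)
            queue = deque([(start, [])])
            while queue:
                state, path = queue.popleft()
                if state == target:
                    yield path
                    continue
                if len(path) >= max_depth:
                    continue
                for lhs, rhs in rules:
                    need = Counter(lhs)
                    if all(state[k] >= n for k, n in need.items()):
                        queue.append((state - need + Counter(rhs), path + [(lhs, rhs)]))

        rules = [(('x', 'z'), ('y',)), (('z', 'z'), ('z',)), (('x',), ('z',)), (('y',), ('x',))]
        for path in rewrite_paths(['x', 'y', 'z', 'z'], ['z'], rules):
            print(path)   # first derivation found, e.g. the one traced above
            break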

  • MAPI find the contacts and calendar folder

    - by Rogier21
    In my outlook I have 1 exchange connection and 2 Personal Folders. I want to go fetch ALL items from the calendars and contacts so I use: /** * Create outlook application */ Outlook.Application oApp = new Outlook.Application(); Outlook.NameSpace oNS = oApp.GetNamespace("mapi"); oNS.Logon(Missing.Value, Missing.Value, true, true); /** * Loop through all the folders */ foreach (Outlook.MAPIFolder oFolder in oNS.Folders) { if (oFolder.Name == "Public Folders") { break; } /** * Get calendar items */ //Outlook.MAPIFolder oCalendar = oNS.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar); Outlook.MAPIFolder oCalendar = oFolder.Folders[5]; Outlook.Items oCalendarItems = oCalendar.Items; //Outlook.MAPIFolder oContacts = oNS.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderContacts); Outlook.MAPIFolder oContacts = oFolder.Folders[7]; Outlook.Items oContactItems = oContacts.Items; But this does not work oFolder.Folders[5] is not always 5 for the calendar, sometimes it's a different value. I cannot find the items by name oFolder.Folders["Calendar"]; because in Dutch the folder will be named "Agenda". Usually I use: Outlook.MAPIFolder oCalendar = oNS.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar); But then I only get the default calendar. How can I get the other calendars?
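
    A hedged sketch of selecting folders by what they contain instead of by position or by a language-dependent display name: every MAPIFolder exposes DefaultItemType, so calendar and contact folders can be found in any store regardless of localization. Recursion into subfolders is included because additional calendars often live below the mailbox root.

        using System.Collections.Generic;
        using Outlook = Microsoft.Office.Interop.Outlook;

        static IEnumerable<Outlook.MAPIFolder> FoldersOfType(
            Outlook.Folders folders, Outlook.OlItemType itemType)
        {
            foreach (Outlook.MAPIFolder folder in folders)
            {
                if (folder.DefaultItemType == itemType)
                    yield return folder;
                foreach (Outlook.MAPIFolder sub in FoldersOfType(folder.Folders, itemType))
                    yield return sub;
            }
        }

        // usage:
        // foreach (var cal in FoldersOfType(oNS.Folders, Outlook.OlItemType.olAppointmentItem)) { ... }
        // foreach (var con in FoldersOfType(oNS.Folders, Outlook.OlItemType.olContactItem)) { ... }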

  • capturing CMD batch file parameter list; write to file for later processing

    - by BobB
    I have written a batch file that is launched as a post processing utility by a program. The batch file reads ~24 parameters supplied by the calling program, stores them into variables, and then writes them to various text files. Since the max input variable in CMD is %9, it's necessary to use the 'shift' command to repeatedly read and store these individually to named variables. Because the program outputs several similar batch files, the result is opening several CMD windows sequentially, assigning variables and writing data files. This ties up the calling program for too long. It occurs to me that I could free up the calling program much faster if maybe there's a way to write a very simple batch file that can write all the command parameters to a text file, where I can process them later. Basically, just grab the parameter list, write it and done. Q: Is there some way to treat an entire series of parameter data as one big text string and write it to one big variable... and then echo the whole big thing to one text file? Then later read the string into %n variables when there's no program waiting to resume? Parameter list is something like 25 - 30 words, less than 200 characters. Sample parameter list: "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 1004968 06/01/2010 "Firstname+Lastname" 101738 "On Account" 20.67 xy-1z 1 8.95 3.00 1.39 0 0 239 8.95 Items in quotes are processed as string variables. List is space delimited. Any suggestions?
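
    A minimal sketch of the "grab everything and get out" approach: %* expands to the entire parameter list (everything after %0) as one string, so it can be appended to a queue file in a single write and re-parsed later when no program is waiting. The file name and the later processing script are assumptions.

        @echo off
        rem append the whole parameter list as one line; quotes are preserved
        >> "%TEMP%\postproc_queue.txt" echo %*

        rem later, offline, each line can be re-split into %1..%n by calling a worker:
        rem for /f "usebackq delims=" %%L in ("%TEMP%\postproc_queue.txt") do call process_line.cmd %%L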

  • How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio

    - by adamjford
    In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.) This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, it truncates it at 43679 characters. I've read on various locations on the Internet that you can set your Maximum Characters Retrieved for XML Data in Tools > Options > Query Results > SQL Server > Results To Grid to Unlimited, and then perform a query such as this: select Convert(xml, Details) from Events where EventID = 13920 (Note that the data is column is not XML at all. CONVERTing the column to XML is merely a workaround I found from Googling that someone else has used to get around the limit SSMS has from retrieving data from a text or varchar(MAX) column.) However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error: Unable to show XML. The following error happened: Unexpected end of file has occurred. Line 5, position 220160. One solution is to increase the number of characters retrieved from the server for XML data. To change this setting, on the Tools menu, click Options. So, any idea on how to access this data? Would converting the column to varchar(MAX) fix my woes?
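
    If the grid keeps truncating no matter how the options are set, one workaround (a hedged sketch with placeholder connection string and output path, not an SSMS feature) is to pull the value out through ADO.NET and write it straight to a file:

        using System.Data.SqlClient;
        using System.IO;

        static void DumpDetails(int eventId)
        {
            using (SqlConnection conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
            using (SqlCommand cmd = new SqlCommand("SELECT Details FROM Events WHERE EventID = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", eventId);
                conn.Open();
                string details = (string)cmd.ExecuteScalar();
                File.WriteAllText(@"C:\temp\event_details.txt", details);
            }
        }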

  • Object reference not set to an instance of an object

    - by MBTHQ
    Can anyone help with the following code? I'm trying to get data from the database column into the DataGridView... I'm getting the error on this line: Dim sql_1 As String = "SELECT * FROM item where item_id = '" + DataGridView_stockout.CurrentCell.Value.ToString() + "'" Private Sub DataGridView_stockout_CellMouseClick(ByVal sender As Object, ByVal e As System.Windows.Forms.DataGridViewCellMouseEventArgs) Handles DataGridView_stockout.CellMouseClick Dim i As Integer = Stock_checkDataSet1.Tables(0).Rows.Count > 0 Dim thiscur_stok As New System.Data.SqlClient.SqlConnection("Data Source=MBTHQ\SQLEXPRESS;Initial Catalog=stock_check;Integrated Security=True") ' Sql Query Dim sql_1 As String = "SELECT * FROM item where item_id = '" + DataGridView_stockout.CurrentCell.Value.ToString() + "'" ' Create Data Adapter Dim da_1 As New SqlDataAdapter(sql_1, thiscur_stok) ' Fill Dataset and Get Data Table da_1.Fill(Stock_checkDataSet1, "item") Dim dt_1 As DataTable = Stock_checkDataSet1.Tables("item") If i >= DataGridView_stockout.Rows.Count Then 'MessageBox.Show("Sorry, DataGridView_stockout doesn't any row at index " & i.ToString()) Exit Sub End If If 1 >= Stock_checkDataSet1.Tables.Count Then 'MessageBox.Show("Sorry, Stock_checkDataSet1 doesn't any table at index 1") Exit Sub End If If i >= Stock_checkDataSet1.Tables(1).Rows.Count Then 'MessageBox.Show("Sorry, Stock_checkDataSet1.Tables(1) doesn't any row at index " & i.ToString()) Exit Sub End If If Not Stock_checkDataSet1.Tables(1).Columns.Contains("os") Then 'MessageBox.Show("Sorry, Stock_checkDataSet1.Tables(1) doesn't any column named 'os'") Exit Sub End If 'DataGridView_stockout.Item("cs_stockout", i).Value = Stock_checkDataSet1.Tables(0).Rows(i).Item("os") Dim ab As String = Stock_checkDataSet1.Tables(0).Rows(i)(0).ToString() End Sub I keep on getting the error saying "Object reference not set to an instance of an object". I don't know where I'm going wrong. Help really appreciated!
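
    A hedged guess at the immediate cause plus a sketch: in CellMouseClick the CurrentCell (or its Value) can be Nothing, for example when the click lands on a header, and calling ToString() on it then throws exactly this exception. Guarding first, and passing the value as a parameter instead of concatenating it into the SQL, keeps the rest of the handler unchanged:

        If DataGridView_stockout.CurrentCell Is Nothing _
           OrElse DataGridView_stockout.CurrentCell.Value Is Nothing Then
            Exit Sub
        End If

        Dim sql_1 As String = "SELECT * FROM item WHERE item_id = @item_id"
        Using da_1 As New SqlDataAdapter(sql_1, thiscur_stok)
            da_1.SelectCommand.Parameters.AddWithValue("@item_id", DataGridView_stockout.CurrentCell.Value.ToString())
            da_1.Fill(Stock_checkDataSet1, "item")
        End Using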

  • Scope of "library" methods

    - by JS
    Hello, I'm apparently laboring under a poor understanding of Python scoping. Perhaps you can help. Background: I'm using the 'if __name__ == "__main__"' construct to perform "self-tests" in my module(s). Each self-test makes calls to the various public methods and prints their results for visual checking as I develop the modules. To keep things "purdy" and manageable, I've created a small method to simplify the testing of method calls: def pprint_vars(var_in): print("%s = '%s'" % (var_in, eval(var_in))) Calling pprint_vars with: pprint_vars('some_variable_name') prints: some_variable_name = 'foo' All fine and good. Problem statement: Not happy to just KISS, I had the brain-drizzle to move my handy-dandy 'pprint_vars' method into a separate file named 'debug_tools.py' and simply import 'debug_tools' whenever I wanted access to 'pprint_vars'. Here's where things fall apart. I would expect import debug_tools foo = bar debug_tools.pprint_vars('foo') to continue working its magic and print: foo = 'bar' Instead, it greets me with: NameError: name 'some_var' is not defined Irrational belief: I believed (apparently mistakenly) that import puts imported methods (more or less) "inline" with the code, and thus the variable scoping rules would remain similar to if the method were defined inline. Plea for help: Can someone please correct my (mis)understanding of scoping as regards imports? Thanks, JS
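
    A hedged sketch of why it breaks and one fix: eval() inside debug_tools evaluates the name against debug_tools' own globals, where foo was never defined; the caller's namespace has to be supplied explicitly. Passing locals()/globals() in as arguments works, or (CPython-specific) the helper can reach back one frame itself:

        # debug_tools.py
        import inspect

        def pprint_vars(var_name):
            caller = inspect.currentframe().f_back          # the frame that called us
            value = eval(var_name, caller.f_globals, caller.f_locals)
            print("%s = '%s'" % (var_name, value))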
