Search Results

Search found 17437 results on 698 pages for 'nick long'.

Page 634 of 698

  • WPF MVVM: Convention over Configuration for ResourceDictionary ?

    - by Jeffrey Knight
    Update In the wiki spirit of StackOverflow, here's an update: I spiked Joe White's IValueConverter suggestion below. It works like a charm. I've written a "quickstart" example of this that automates the mapping of ViewModels to Views using some cheap string replacement. If no View is found to represent the ViewModel, it defaults to an "Under Construction" page. I'm dubbing this approach "WPF MVVM White" since it was Joe White's idea. Here are a couple of screenshots. The first image is a case where "[SomeControlName]ViewModel" has a corresponding "[SomeControlName]View", based on pure naming convention. The second is a case where the ViewModel doesn't have any views to represent it. No more ResourceDictionaries with long ViewModel-to-View mappings. It's pure naming convention now. I'm hosting a download of the project here: http://rootsilver.com/files/Mvvm.White.Quickstart.zip I'll follow up with a longer blog post walk-through. Original Post I read Josh Smith's fantastic MSDN article on WPF MVVM over the weekend. It's destined to be a cult classic. It took me a while to wrap my head around the magic of asking WPF to render the ViewModel. It's like saying "Here's a class, WPF. Go figure out which UI to use to present it." For those who missed this magic, WPF can do this by looking up the View for the ViewModel in the ResourceDictionary mapping and pulling out the corresponding View. (Scroll down to Figure 10, Supplying a View.) The first thing that jumps out at me immediately is that there's already a strong naming convention: classNameView ("View" suffix) and classNameViewModel ("ViewModel" suffix). My question is: since the ResourceDictionary can be manipulated programmatically, I'm wondering if anyone has managed to Regex.Replace the whole thing away, so the lookup is automatic and any new View/ViewModels get resolved by virtue of their naming convention? [Edit] What I'm imagining is a hook/interception into ResourceDictionary. ... Also considering a method at startup that uses interop to pull out *View$ and *ViewModel$ class names to build the DataTemplate dictionary in code: //build list foreach .... String.Format("<DataTemplate DataType=\"{x:Type vm:{0} }\"><v:{1} /></DataTemplate>", ...)

    Read the article

  • Do you still limit line length in code?

    - by Noldorin
    This is a matter on which I would like to gauge the opinion of the community: Do you still limit the length of lines of code to a fixed maximum? This was certainly a convention of the past for many languages; one would typically cap the number of characters per line to a value such as 80 (and more recnetly 100 or 120 I believe). As far as I understand, the primary reasons for limiting line length are: Readability - You don't have to scroll over horizontally when you want to see the end of some lines. Printing - Admittedly (at least in my experience), most code that you are working on does not get printed out on paper, but by limiting the number of characters you can insure that formatting doesn't get messed up when printed. Past editors (?) - Not sure about this one, but I suspect that at some point in the distant past of programming, (at least some) text editors may have been based on a fixed-width buffer. I'm sure there are points that I am still missing out, so feel free to add to these... Now, when I tend to observe C or C# code nowadays, I often see a number of different styles, the main ones being: Line length capped to 80, 100, or even 120 characters. As far as I understand, 80 is the traditional length, but the longer ones of 100 and 120 have appeared because of the widespread use of high resolutions and widescreen monitors nowadays. No line length capping at all. This tends to be pretty horrible to read, and I don't see it too often, though it's certainly not too rare either. Inconsistent capping of line length. The length of some lines are limited to a fixed maximum (or even a maximum that changes depending on the file/location in code), while others (possibly comments) are not at all. My personal preference here (at least recently) has been to cap the line length to 100 in the Visual Studio editor. This means that in a decently sized window (on a non-widescreen monitor), the ends of lines are still fully visible. I can however see a few disadvantages in this, especially when you end up writing code that's indented 3 or 4 levels and then having to include a long string literal - though I often take this as a sign to refactor my code! In particular, I am curious what the C and C# coders (or anyone who uses Visual Studio for that matter) think about this point, though I would be interested in hearing anyone's thoughts on the subject. Edit Thanks for the all answers - I appreciate the variety of opinions here, all presenting sound reasons. Consensus does seem to be tipping in the direction of always (or almost always) limit the line length. Interestingly, it seems to be in various coding standards to limit the line length. Judging by some of the answers, both the Python and Google CPP guidelines set the limit at 80 chars. I haven't seen anything similar regarding C# or VB.NET, but I would be curious to see if there are ones anywhere.

    Read the article

  • How to efficiently save changes made in UI/main thread with Core Data?

    - by Jaanus
    So, there have been several posts here about importing and saving data from an external data source into Core Data. Apple documents a reasonable pattern for this: "import and save on background thread, merge saved objects to main thread." All fine and good. I have a related but different problem: the user is modifying data in the UI and main thread, and thus modifies state of some objects in the managed object context (MOC). I would like to save these changes from time to time. What is a good way to do that? Now, you could say that I could do the same: create a background thread with its own MOC and pass the changed objectID-s there. The catch-22 for me with this is that an object's ID changes when it is saved, and I cannot guarantee the order of things happening. I may end up passing a different objectID into the background thread for the same object, based on whether the object has been previously saved or not, and I don't know if Core Data can resolve this and see that different objectID-s are pointing to the same object and not create duplicates for me. (I could test this, but I'm lazywebbing with this question first.) One thought I had: I could always do MOC saves on a background thread, and queue them up with operationqueue, so that there is always only one save in progress. I would not create a new MOC, I would just use the same MOC as in main thread. Now, this is not thread safe and when someone modifies the MOC in main thread while it is being saved in background thread, the results will probably be catastrophic. But, minus the thread safety, you can see what kind of solution I'd wish for. To be clear, the problem I need to fix is that if I just do the save in main thread, it blocks the UI for an unacceptably long period of time, I want to move the save to background thread. So, questions: what about the reasoning of an object ID changing during saving, and Core Data being able to resolve them to the same object? Would this be the right way of addressing this problem? any other good ways of doing this?

    Read the article

  • Subclassing a window from a thread in c#

    - by user258651
    I'm creating a thread that looks for a window. When it finds the window, it overrides its windowproc, and handles WM_COMMAND and WM_CLOSE. Here's the code that looks for the window and subclasses it: public void DetectFileDialogProc() { Window fileDialog = null; // try to find the dialog twice, with a delay of 500 ms each time for (int attempts = 0; fileDialog == null && attempts < 2; attempts++) { // FindDialogs enumerates all windows of class #32770 via an EnumWindowProc foreach (Window wnd in FindDialogs(500)) { IntPtr parent = NativeMethods.User32.GetParent(wnd.Handle); if (parent != IntPtr.Zero) { // we're looking for a dialog whose parent is a dialog as well Window parentWindow = new Window(parent); if (parentWindow.ClassName == NativeMethods.SystemWindowClasses.Dialog) { fileDialog = wnd; break; } } } } // if we found the dialog if (fileDialog != null) { OldWinProc = NativeMethods.User32.GetWindowLong(fileDialog.Handle, NativeMethods.GWL_WNDPROC); NativeMethods.User32.SetWindowLong(fileDialog.Handle, NativeMethods.GWL_WNDPROC, Marshal.GetFunctionPointerForDelegate(new WindowProc(WndProc)).ToInt32()); } } And the windowproc: public IntPtr WndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam) { lock (this) { if (!handled) { if (msg == NativeMethods.WM_COMMAND || msg == NativeMethods.WM_CLOSE) { // adding to a list. i never access the window via the hwnd from this list, i just treat it as a number _addDescriptor(hWnd); handled = true; } } } return NativeMethods.User32.CallWindowProc(OldWinProc, hWnd, msg, wParam, lParam); } This all works well under normal conditions. But I am seeing two instances of bad behavior in order of badness: If I do not close the dialog within a minute or so, the app crashes. Is this because the thread is getting garbage collected? This would kind of make sense, as far as GC can tell the thread is done? If this is the case, (and I don't know that it is), how can I make the thread stay around as long as the dialog is around? If I immediately close the dialog with the 'X' button (WM_CLOSE) the app crashes. I believe its crashing in the windowproc, but I can't get a breakpoint in there. I'm getting an AccessViolationException, The exception says "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Its a race condition, but of what I don't know. FYI, I had been reseting the old windowproc once I processed the commands, but that was crashing even more often! Any ideas on how I can solve these issues?

    Read the article

  • EclipseLink: read a complex object model in an ordered way

    - by Raven
    Hi, I need to read a complex model in an ordered way with EclipseLink. The order is mandatory because it is a huge database and I want to have an output of a small portion of the database in a JFace table view. Trying to reorder it in the loading/querying thread takes too long, and ordering it in the LabelProvider blocks the UI thread for too much time, so I thought that if EclipseLink could be used in such a way that the database orders it, it might give me the performance I need. Unfortunately the object model cannot be changed :-( The model is something like: @SuppressWarnings("serial") @Entity public class Thing implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; private String name; @OneToMany(cascade=CascadeType.ALL) @PrivateOwned private List<Property> properties = new ArrayList<Property>(); ... // getter and setter following here } public class Property implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; @OneToOne private Item item; private String value; ... // getter and setter following here } public class Item implements Serializable { @Id @GeneratedValue(strategy = GenerationType.TABLE) private int id; private String name; .... // getter and setter following here } // Code end In the table view the y-axis is more or less created with the query Query q = em.createQuery("SELECT m FROM Thing m ORDER BY m.name ASC"); using the "name" attribute from the Thing objects as label. In the table view the x-axis is more or less created with the query Query q = em.createQuery("SELECT m FROM Item m ORDER BY m.name ASC"); using the "name" attribute from the Item objects as label. Each cell has the value Things.getProperties().get[x].getValue() Unfortunately the list "properties" is not ordered, so the combination of cell value and x-axis column number (x) is not necessarily correct. Therefore I need to order the list "properties" in the same way as I ordered the labeling of the x-axis. And exactly this is the thing I don't know how to do. So querying for the Thing objects should return the list "properties" ordered by name ASC, but by the name of the "Item" objects. My idea is something like having a query with two JOINs, joining Thing with Property and with Item, but somehow I have been unable to get it to work yet. Thank you for your help and your ideas to solve this riddle.
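    One way to let the database do this ordering (rather than the UI thread) would be a JPQL query that joins from Thing through its properties to Item and orders by the Item name. The sketch below is only an assumption based on the entity names in the post; it uses the same untyped Query style as the queries above, and "thingId" is a placeholder parameter.

        // Hedged sketch: returns the properties of one Thing already ordered by
        // the name of their Item, so no re-sorting is needed in the LabelProvider.
        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.Query;

        public class OrderedPropertyLoader {
            @SuppressWarnings("unchecked")
            public static List<Property> loadProperties(EntityManager em, int thingId) {
                Query q = em.createQuery(
                    "SELECT p FROM Thing t JOIN t.properties p JOIN p.item i "
                  + "WHERE t.id = :thingId ORDER BY i.name ASC");
                q.setParameter("thingId", thingId);
                return q.getResultList();
            }
        }

    Because the x-axis labels and this list come from the same ORDER BY i.name ASC, column x and the returned properties line up by construction, as long as every Thing has a property for every Item.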

    Read the article

  • Becoming a professional programmer / software engineer

    - by Matt
    This isn't strictly about programming, more about being a programmer, so I'm sorry if its not the right kind of question to ask on this forum (mod, please delete if it isn't) I'm a computer tech in the US Army, and once I'm out I'll have eight years on the job. I'm about to start a degree through an online school (the only way I can get the army to pay for it while I'm still in), and I'm seriously looking at getting a computer science degree. I'm great with computers. I can take one apart and put it back together with my eyes closed. I'm A+ and Network+ certified and I'm getting a couple other CompTIA certs before I get out. I can work Windows as well as anyone on this planet and I'm not terrible with Linux. A job in computers is something I've always wanted. But, aside from being a computer technician, it seems that every job in the field requires programming ability. I like programming as a hobby. I programmed TI BASIC in high school and I'm teaching myself Python, but that's as far as my experience goes. That sort of brings me to my questions: I've always heard that the first language is the most difficult, and once you learn it well then all the others sort of fall into place for you. Is that true? Like, if I spend the next eight months mastering Python, will I pretty much be able to pick up at least fair proficiency in any other OO language within a month of studying it or whatever? How easy is it to burn out? the biggest thing I'm afraid of is just burning out on programming. I can go all day long if I'm programming strictly for my own personal desire, but I can imagine it being really easy to burn out after a few years of programming to deadlines and certain specifications. Especially if its a big project involving a dozen different designers. From what I told you about myself, would I already be qualified to work as a regular technician (geek squad type or maybe running a computer repair shop). Is Python a good base to learn from? I've heard that it makes you hate other languages because they feel more convoluted when learning, but also that its a great beginner language. If you're a professional programmer, did you have any of the same fears? Would you recommend that I stick to computer repair and Python rather than try to get into corporate programming? (just from what you've read in this thread, anyway) Thanks for taking the time to read all this and answer (if you did)

    Read the article

  • Consuming Web Services requiring Authentication from behind Proxy server

    - by Jan Petersen
    Hi All, I've seen a number of post about Proxy Authentication, but none that seams to address this problem. I'm building a SharePoint Web Service consuming desktop application, using Java, JAX-WS in NetBeans. I have a working prototype, that can query the server for authentication mode, successfully authenticate and retrieve a list of web site. However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I'm running into trouble. The normal -dhttp.proxyHost ... settings does not seam to help any. But I have found that by creating a ProxySelector class and setting it as default, I can regain access to the authentication web service, but I still can't retrieve the list of web sites from the SharePoint server. It's almost as if the authentication I provide is going to the proxy rather than the SharePoint server. Anyone have any experience on how to make this work? I have put the source text java class files of a demo app up, showing the issue at the following urls (it's a bit to long even in the short demo form to post here). link text When running the code from a network behind a proxy server, I successfully retrieve the Authentication mode from the server, but the request for the Web Site list generates an exception originating at: com.sun.xml.internal.ws.transport.http.client .HttpClientTransport.readResponseCodeAndMessage(HttpClientTransport.java:201) The output from the source when no proxy is on the network is listed below: Successfully retrieved the SharePoint WebService response for Authentication SharePoint authentication method is: WINDOWS Calling Web Service to retrieve list of web site. Web Service call response: -------------- XML START -------------- <Webs xmlns="http://schemas.microsoft.com/sharepoint/soap/"> <Web Title="Collaboration Lab" Url="http://host.domain.com/collaboration"/> <Web Title="Global Data Lists" Url="http://host.domain.com/global_data_lists"/> <Web Title="Landing" Url="http://host.domain.com/Landing"/> <Web Title="SharePoint HelpDesk" Url="http://host.domain.com/helpdesk"/> <Web Title="Program Management" Url="http://host.domain.com/programmanagement"/> <Web Title="Project Site" Url="http://host.domain.com/Project Site"/> <Web Title="SharePoint Administration Tools" Url="http://host.domain.com/admin"/> <Web Title="Space Management Project" Url="http://host.domain.com/spacemgmt"/> </Webs> -------------- XML END -------------- Br Jan
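    If the SharePoint server is reachable without going through the proxy (often the case on an intranet), one thing worth trying is routing only that host around the proxy, so the Windows/NTLM handshake talks to the server directly while everything else still uses the proxy. A minimal sketch of such a ProxySelector follows; the proxy address and "host.domain.com" are placeholders, not values from the demo code.

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.net.Proxy;
        import java.net.ProxySelector;
        import java.net.SocketAddress;
        import java.net.URI;
        import java.util.Collections;
        import java.util.List;

        public class SharePointAwareProxySelector extends ProxySelector {

            private final Proxy corporateProxy =
                new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy.example.com", 8080));

            @Override
            public List<Proxy> select(URI uri) {
                // Go direct to the SharePoint host, through the proxy for everything else.
                if (uri.getHost() != null && uri.getHost().endsWith("host.domain.com")) {
                    return Collections.singletonList(Proxy.NO_PROXY);
                }
                return Collections.singletonList(corporateProxy);
            }

            @Override
            public void connectFailed(URI uri, SocketAddress sa, IOException ioe) {
                // A real implementation might log this or fall back to the other route.
            }
        }

    Installing it with ProxySelector.setDefault(new SharePointAwareProxySelector()) before the first web service call keeps the authentication headed at the SharePoint server rather than at the proxy; the equivalent shortcut, where it fits, is the http.nonProxyHosts system property.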

    Read the article

  • How to add images to a ListView in Android

    - by Wawan Den Frastøtende
    i would like to add images on my list view, and i have this code package com.wilis.appmysql; import android.app.ListActivity; import android.content.Intent; import android.os.Bundle; //import android.util.Log; import android.view.View; import android.widget.ArrayAdapter; import android.widget.ListView; import android.widget.Toast; public class menulayanan extends ListActivity { /** Called when the activity is first created. */ public void onCreate(Bundle icicle) { super.onCreate(icicle); // Create an array of Strings, that will be put to our ListActivity String[] menulayanan = new String[] { "Berita Terbaru", "Info Item", "Customer Service", "Help","Exit"}; //Menset nilai array ke dalam list adapater sehingga data pada array akan dimunculkan dalam list this.setListAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1,menulayanan)); } @Override /**method ini akan mengoveride method onListItemClick yang ada pada class List Activity * method ini akan dipanggil apabilai ada salah satu item dari list menu yang dipilih */ protected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); // Get the item that was clicked // Menangkap nilai text yang dklik Object o = this.getListAdapter().getItem(position); String pilihan = o.toString(); // Menampilkan hasil pilihan menu dalam bentuk Toast tampilkanPilihan(pilihan); } /** * Tampilkan Activity sesuai dengan menu yang dipilih * */ protected void tampilkanPilihan(String pilihan) { try { //Intent digunakan untuk sebagai pengenal suatu activity Intent i = null; if (pilihan.equals("Berita Terbaru")) { i = new Intent(this, PraBayar.class); } else if (pilihan.equals("Info Item")) { i = new Intent(this, PascaBayar.class); } else if (pilihan.equals("Customer Service")) { i = new Intent(this, CustomerService.class); } else if (pilihan.equals("Help")) { i = new Intent(this, Help.class); } else if (pilihan.equals("Exit")) { finish(); } else { Toast.makeText(this,"Anda Memilih: " + pilihan + " , Actionnya belum dibuat", Toast.LENGTH_LONG).show(); } startActivity(i); } catch (Exception e) { e.printStackTrace(); } } } and i want to add different image per list, so i mean is i want to add a.png to "Berita Terbaru", b.png to "Info Item", c.png "Customer Service" , so how to do it? i was very confused about this, thanks before...
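    The stock android.R.layout.simple_list_item_1 row has no ImageView, so the usual approach is a custom row layout plus an ArrayAdapter that overrides getView. The sketch below is only illustrative: it assumes a layout res/layout/row_menu.xml containing an ImageView (@+id/icon) and a TextView (@+id/label), and drawables a, b, c, ... in res/drawable; all of those names are made up.

        import android.content.Context;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.ArrayAdapter;
        import android.widget.ImageView;

        public class MenuAdapter extends ArrayAdapter<String> {

            // One drawable per entry, in the same order as the menulayanan array.
            private static final int[] ICONS = {
                R.drawable.a, R.drawable.b, R.drawable.c, R.drawable.d, R.drawable.e
            };

            public MenuAdapter(Context context, String[] items) {
                super(context, R.layout.row_menu, R.id.label, items);
            }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                // Let ArrayAdapter inflate/recycle the row and set the label text,
                // then swap in the icon for this position.
                View row = super.getView(position, convertView, parent);
                ((ImageView) row.findViewById(R.id.icon)).setImageResource(ICONS[position]);
                return row;
            }
        }

    In onCreate the only change would be setListAdapter(new MenuAdapter(this, menulayanan)) instead of the plain ArrayAdapter; onListItemClick keeps working because getItem(position) still returns the same String.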

    Read the article

  • TFS CM resource recommendations / some questions

    - by John
    I am working with a small development shop that consists of a group of 5 developers and 1 QA person. We are using TFS and need to get more sophisticated on how we use this tool. Currently the development team checks in their code each evening. A nightly build runs and pushes the output out on a network share. Our QA person uses this build for testing the next day. Sometimes the build off the trunk codebase has issues/bugs that hinder the QA process, and it hasn’t been a giant issue in the past, but we now want to get to a state where we have our QA person testing on a stable QA build. So I believe we need to create a branch (call it QA), and the developers will continue to develop off the trunk, but the QA person will use builds created from code in the QA branch. Seems simple enough, but we have started doing code reviews as well. So we have another desire in that only code that has been code reviewed can be promoted to the QA branch. Each developer works off a TFS item, and when they check in a changeset, they do it against a TFS item which creates a link between a checked in code file and a TFS item. Eventually the TFS item becomes complete and ready for code review. All code attached to the TFS item is reviewed. How can the versions of these files get promoted to the QA branch? In the QA branch, if a bug is found, we want to fix it in the QA branch and have the changes migrated back to the trunk. I believe TFS has a way to automatically do this doesn’t it? Long story short, we want to get to a build and CM environment that I believe is pretty standard, but we are unaware of how to make this happen with TFS. Given our situation above, can someone point out a book or website(s) that would address our specific needs? We would like to make this happen without having to get too deep in CM theory or TFS. I very much appreciate any and all suggestions! Thanks, John

    Read the article

  • ld: symbol(s) not found with OpenSSL (libssl)

    - by Benjamin
    Hi all, I'm trying to build TorTunnel on my mac. I've successfully installed the Boost library and its development files. TorTunnel also requires the OpenSSL and its development files. I've got them installed in /usr/lib/libssl.dylib and /usr/include/openssl/. When I run the make command this is the error i'm getting: g++ -ggdb -g -O2 -lssl -lboost_system-xgcc42-mt-1_38 -o torproxy TorProxy.o HybridEncryption.o Connection.o Cell.o Directory.o ServerListing.o Util.o Circuit.o CellEncrypter.o RelayCellDispatcher.o CellConsumer.o ProxyShuffler.o CreateCell.o CreatedCell.o TorTunnel.o SocksConnection.o Network.o Undefined symbols: "_BN_hex2bn", referenced from: Circuit::initializeDhParameters() in Circuit.o "_BN_free", referenced from: Circuit::~Circuit()in Circuit.o Circuit::~Circuit()in Circuit.o CreatedCell::getKeyMaterial(unsigned char**, unsigned char**)in CreatedCell.o "_DH_generate_key", referenced from: Circuit::initializeDhParameters() in Circuit.o "_PEM_read_bio_RSAPublicKey", referenced from: ServerListing::getOnionKey() in ServerListing.o "_BIO_s_mem", referenced from: Connection::initializeSSL() in Connection.o Connection::initializeSSL() in Connection.o "_DH_free", referenced from: Circuit::~Circuit()in Circuit.o "_BIO_ctrl_pending", referenced from: Connection::writeFromBuffer(boost::function)in Connection.o "_RSA_size", referenced from: HybridEncryption::encryptInSingleChunk(unsigned char*, int, unsigned char**, int*, rsa_st*)in HybridEncryption.o HybridEncryption::encryptInHybridChunk(unsigned char*, int, unsigned char**, int*, rsa_st*)in HybridEncryption.o HybridEncryption::encrypt(unsigned char*, int, unsigned char**, int*, rsa_st*)in HybridEncryption.o "_RSA_public_encrypt", referenced from: HybridEncryption::encryptInSingleChunk(unsigned char*, int, unsigned char**, int*, rsa_st*)in HybridEncryption.o HybridEncryption::encryptInHybridChunk(unsigned char*, int, unsigned char**, int*, rsa_st*)in HybridEncryption.o "_BN_num_bits", referenced from: CreateCell::CreateCell(unsigned short, dh_st*, rsa_st*)in CreateCell.o CreatedCell::getKeyMaterial(unsigned char**, unsigned char**)in CreatedCell.o CreatedCell::getKeyMaterial(unsigned char**, unsigned char**)in CreatedCell.o CreatedCell::isValid() in CreatedCell.o "_SHA1", referenced from: CellEncrypter::expandKeyMaterial(unsigned char*, int, unsigned char*, int)in CellEncrypter.o "_BN_bn2bin", referenced from: CreateCell::CreateCell(unsigned short, dh_st*, rsa_st*)in CreateCell.o "_BN_bin2bn", referenced from: CreatedCell::getKeyMaterial(unsigned char**, unsigned char**)in CreatedCell.o "_DH_compute_key", referenced from: CreatedCell::getKeyMaterial(unsigned char**, unsigned char**)in CreatedCell.o "_BIO_new", referenced from: Connection::initializeSSL() in Connection.o Connection::initializeSSL() in Connection.o "_BIO_new_mem_buf", referenced from: ServerListing::getOnionKey() in ServerListing.o "_AES_ctr128_encrypt", referenced from: HybridEncryption::AES_encrypt(unsigned char*, int, unsigned char*, unsigned char*, int)in HybridEncryption.o CellEncrypter::aesOperate(Cell&, aes_key_st*, unsigned char*, unsigned char*, unsigned int*)in CellEncrypter.o "_BIO_read", referenced from: Connection::writeFromBuffer(boost::function)in Connection.o "_SHA1_Update", referenced from: CellEncrypter::calculateDigest(SHAstate_st*, RelayCell&, unsigned char*)in CellEncrypter.o CellEncrypter::initKeyMaterial(unsigned char*)in CellEncrypter.o CellEncrypter::initKeyMaterial(unsigned char*)in CellEncrypter.o "_SHA1_Final", referenced from: 
CellEncrypter::calculateDigest(SHAstate_st*, RelayCell&, unsigned char*)in CellEncrypter.o "_DH_size", referenced from: CreatedCell::getKeyMaterial(unsigned char**, unsigned char**)in CreatedCell.o "_DH_new", referenced from: Circuit::initializeDhParameters() in Circuit.o "_BIO_write", referenced from: Connection::readIntoBufferComplete(boost::function, boost::system::error_code const&, unsigned long)in Connection.o "_RSA_free", referenced from: Circuit::~Circuit()in Circuit.o "_BN_dup", referenced from: Circuit::initializeDhParameters() in Circuit.o Circuit::initializeDhParameters() in Circuit.o "_BN_new", referenced from: Circuit::initializeDhParameters() in Circuit.o Circuit::initializeDhParameters() in Circuit.o "_SHA1_Init", referenced from: CellEncrypter::CellEncrypter()in CellEncrypter.o CellEncrypter::CellEncrypter()in CellEncrypter.o "_RAND_bytes", referenced from: HybridEncryption::encryptInHybridChunk(unsigned char*, int, unsigned char**, int*, rsa_st*)in HybridEncryption.o Util::getRandomId() in Util.o "_AES_set_encrypt_key", referenced from: HybridEncryption::AES_encrypt(unsigned char*, int, unsigned char*, unsigned char*, int)in HybridEncryption.o CellEncrypter::initKeyMaterial(unsigned char*)in CellEncrypter.o CellEncrypter::initKeyMaterial(unsigned char*)in CellEncrypter.o "_BN_set_word", referenced from: Circuit::initializeDhParameters() in Circuit.o "_RSA_new", referenced from: ServerListing::getOnionKey() in ServerListing.o ld: symbol(s) not found collect2: ld returned 1 exit status make: *** [torproxy] Error 1 Any idea how I could fix it?

    Read the article

  • Adding marker to the retrieved location

    - by Rahul Varma
    I have displayed the map in my app by using the following code. I have retrieved info from the database and displayed the map. Now i want to add marker to the retrieved location... googleMao.java public class googleMap extends MapActivity{ private MapView mapView; private MapController mc; GeoPoint p; long s; Cursor cur; SQLiteDatabase db; createSqliteHelper csh; String qurry; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.map); // String qurry=getIntent().getStringExtra("value"); //here is calling the map string qurry s = getIntent().getLongExtra("value",2); map(); mapView = (MapView) findViewById(R.id.mapview1); LinearLayout zoomLayout = (LinearLayout)findViewById(R.id.zoom); View zoomView = mapView.getZoomControls(); zoomLayout.addView(zoomView, new LinearLayout.LayoutParams( LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT)); //mapView.displayZoomControls(true); mapView.setBuiltInZoomControls(true); mc = mapView.getController(); String coordinates[] = {"1.352566007", "103.78921587"}; double lat = Double.parseDouble(coordinates[0]); double lng = Double.parseDouble(coordinates[1]); Geocoder geoCoder = new Geocoder(this, Locale.getDefault()); try { List<Address> addresses = geoCoder.getFromLocationName(qurry,5); String add = ""; if (addresses.size() > 0) { p = new GeoPoint( (int) (addresses.get(0).getLatitude() * 1E6), (int) (addresses.get(0).getLongitude() * 1E6)); mc.animateTo(p); mapView.invalidate(); mc.setZoom(6); } } catch (IOException e) { e.printStackTrace(); } } @Override protected boolean isRouteDisplayed() { // Required by MapActivity return false; } public void map() { String[] str={"type"}; int[] i={R.id.type}; csh=new createSqliteHelper(this); db=csh.getReadableDatabase(); cur=db.rawQuery("select type from restaurants where _id="+s,null); if(cur.moveToFirst()) { qurry=cur.getString(cur.getColumnIndex("type")); } } }
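    With the Maps v1 API used here (MapActivity/MapView), a marker is normally drawn by an ItemizedOverlay added to the MapView's overlay list. A minimal sketch, where R.drawable.marker is a placeholder pin image:

        import java.util.ArrayList;
        import android.graphics.drawable.Drawable;
        import com.google.android.maps.GeoPoint;
        import com.google.android.maps.ItemizedOverlay;
        import com.google.android.maps.OverlayItem;

        public class SimpleMarkerOverlay extends ItemizedOverlay<OverlayItem> {

            private final ArrayList<OverlayItem> items = new ArrayList<OverlayItem>();

            public SimpleMarkerOverlay(Drawable marker) {
                super(boundCenterBottom(marker)); // anchor the pin at its bottom centre
            }

            public void addMarker(GeoPoint point, String title, String snippet) {
                items.add(new OverlayItem(point, title, snippet));
                populate(); // let the overlay know how many items it now has
            }

            @Override
            protected OverlayItem createItem(int i) {
                return items.get(i);
            }

            @Override
            public int size() {
                return items.size();
            }
        }

    In onCreate, once p has been geocoded, something like this would attach the marker: SimpleMarkerOverlay markers = new SimpleMarkerOverlay(getResources().getDrawable(R.drawable.marker)); markers.addMarker(p, "Restaurant", qurry); mapView.getOverlays().add(markers); mapView.invalidate();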

    Read the article

  • Using XStream to deserialize an XML response with separate "success" and "failure" forms?

    - by Chris Markle
    I am planning on using XStream with Java to convert between objects and XML requests and XML responses and objects, where the XML is flowing over HTTP/HTTPS. On the response side, I can get a "successful" response, which seems like it would map to one Java class, or a "failure" response, which seems like it would map to another Java class. For example, for a "file list" request, I could get an affirmative response e.g., <?xml version="1.0" encoding="UTF-8"?> <response> <success>true</success> <files> <file>[...]</file> <file>[...]</file> <file>[...]</file> </files> </response> or I could get a negative response e.g., <?xml version="1.0" encoding="UTF-8"?> <response> <success>false</success> <error> <errorCode>-502</errorCode> <systemMessage>[...]AuthenticationException</systemMessage> <userMessage>Not authenticated</userMessage> </error> </response> To handle this, should I include fields in one class for both cases or should I somehow use XStream to "conditionally" create one of the two potential classes? The case with fields from both response cases in the same object would look something like this: Class Response { boolean success; ArrayList<File> files; ResponseError error; [...] } Class File { String name; long size; [...] } Class ResponseError { int errorCode; String systemMessage; String userMessage; [...] } I don't know what the "use XStream and create different objects in case of success or error" looks like. Is it possible to do that somehow? Is it better or worse way to go? Anyway, any advice on how to handle using XStream to deal with this success vs. failure response case would be appreciated. Thanks in advance!
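    For what it's worth, the single-Response-class route usually needs very little XStream configuration: alias the root element and the repeated <file> element, and both the success and the failure documents deserialize into the same object, with files null on failure and error null on success. A hedged sketch, reusing the class shapes from the post (File here is the post's own File class, not java.io.File):

        import com.thoughtworks.xstream.XStream;

        public class ResponseParser {

            private final XStream xstream = new XStream();

            public ResponseParser() {
                xstream.alias("response", Response.class); // root element
                xstream.alias("file", File.class);         // children of <files>
                // success, files and error match the Response field names, and
                // errorCode/systemMessage/userMessage match ResponseError, so no
                // further aliases should be needed.
            }

            public Response parse(String xml) {
                return (Response) xstream.fromXML(xml);
            }
        }

    The caller then branches on the success flag and either reads the file list or surfaces the error details, which keeps the two response forms in one class without any conditional class creation.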

    Read the article

  • How to setup a webserver in common lisp?

    - by Serpico
    Several months ago, I was inspired by the magnificent book ANSI Common Lisp written by Paul Graham, and the statement that Lisp could be used as a secret weapon in your web development, published by the same author on his blog. Wow, that is amazing. That is something that I have been looking for for a long time. The author really developed a successful web application and sold it to Yahoo. With those encouraging images, I decided to spend some time (1 year or 2 years, who knows) on learning Common Lisp. Maybe someday I will develop my own web application and turn into a great Lisp expert. In fact, this is the second time for me to get to study Lisp. The first time was a couple of years ago, when I was fascinated by the famous book SICP but later found that Scheme was so unbelievably immature for real-life applications. After reading some chapters of ANSI Common Lisp, I was pretty sure that it is a great book full of detailed exploration of Common Lisp. Then I began to set up a web server in Common Lisp. After all, this should be the best way if you want to learn something. Demonstrations are always better than definitions. As suggested by the book Practical Common Lisp (by the way, this is also a great book), I chose to install AllegroServe on some Common Lisp implementation. Then, from somewhere else, I learned that Hunchentoot seems to be better than AllegroServe. (I don't remember where or whom this word is from, so don't argue with me.) Ironically, you know what, I could never install the two packages on any Common Lisp implementation. More annoyingly, I don't even know why. The machine always spits out a lot of jargon and leads me into chaos. I've tried searching the internet and have not found anything. Could anybody who has successfully installed these packages on Linux tell me how you did it? Have you run into any trouble? How did you figure out what was wrong and fix it? The more detailed, the more helpful.

    Read the article

  • Creating a variable list Pashua, OS X & Bash.

    - by S1syphus
    First of all, for those that don't know, Pashua is a tool for creating native Aqua dialog windows. An example of what a window config looks like is: # pashua_run() # Define what the dialog should be like # Take a look at Pashua's Readme file for more info on the syntax conf=" # Set transparency: 0 is transparent, 1 is opaque *.transparency=0.95 # Set window title *.title = Introducing Pashua # Introductory text tb.type = text tb.default = "HELLO WORLD" tb.height = 276 tb.width = 310 tb.x = 340 tb.y = 44 if [ -e "$icon" ] then # Display Pashua's icon conf="$conf img.type = image img.x = 530 img.y = 255 img.path = $icon" fi if [ -e "$bgimg" ] then # Display background image conf="$conf bg.type = image bg.x = 30 bg.y = 2 bg.path = $bgimg" fi pashua_run "$conf" echo " tb = $tb" The problem is, Pashua can't really get output from stdout, but it can get arguments. Following on from what Dennis Williamson posted here, ideally it should generate an output file based on information from a text file, to be executed in pashua_run, or add the pashua_run around the window argument: count=1 while read -r i do echo "AB${count}.type = openbrowser" echo "AB${count}.label = Choose a master playlist file" echo "AB${count}.width=310" echo "AB${count}.tooltip = Blabla filesystem browser" echo "some text with a line from the file: $i" (( count++ )) done < TEST.txt >> long.txt So the output is AB1.type = openbrowser AB1.label = Choose a master playlist file AB1.width=310 AB1.tooltip = Blabla filesystem browser some text with a line from the file: foo AB2.type = openbrowser AB2.label = Choose a master playlist file AB2.width=310 AB2.tooltip = Blabla filesystem browser some text with a line from the file: bar AB3.type = openbrowser AB3.label = Choose a master playlist file AB3.width=310 AB3.tooltip = Blabla filesystem browser some text with a line from the file: dev AB4.type = openbrowser AB4.label = Choose a master playlist file AB4.width=310 AB4.tooltip = Blabla filesystem random So if there is a clever way to get the output of that and place it into pashua_run on the fly, that would be cool, i.e. load the contents of TEST.txt, generate the config and place it into pashua_run. I've tried using cat and opening the file... but because it's in pashua_run it doesn't work; is there a smart way to break out then back in? Or the second way I was thinking of was to get the output, then append it into the middle of a text file containing the Pashua runtime, then execute it. Maybe slightly hacky, but I would imagine it will do the job. Any ideas? ++ I know I could probably make my life a lot easier by doing this in ActionScript and Cocoa, but at present I don't have time for such a learning curve, although I do plan to get round to it.

    Read the article

  • What is the best workaround for the WCF client `using` block issue?

    - by Eric King
    I like instantiating my WCF service clients within a using block as it's pretty much the standard way to use resources that implement IDisposable: using (var client = new SomeWCFServiceClient()) { //Do something with the client } But, as noted in this MSDN article, wrapping a WCF client in a using block could mask any errors that result in the client being left in a faulted state (like a timeout or communication problem). Long story short, when Dispose() is called, the client's Close() method fires, but throws and error because it's in a faulted state. The original exception is then masked by the second exception. Not good. The suggested workaround in the MSDN article is to completely avoid using a using block, and to instead instantiate your clients and use them something like this: try { ... client.Close(); } catch (CommunicationException e) { ... client.Abort(); } catch (TimeoutException e) { ... client.Abort(); } catch (Exception e) { ... client.Abort(); throw; } Compared to the using block, I think that's ugly. And a lot of code to write each time you need a client. Luckily, I found a few other workarounds, such as this one on IServiceOriented. You start with: public delegate void UseServiceDelegate<T>(T proxy); public static class Service<T> { public static ChannelFactory<T> _channelFactory = new ChannelFactory<T>(""); public static void Use(UseServiceDelegate<T> codeBlock) { IClientChannel proxy = (IClientChannel)_channelFactory.CreateChannel(); bool success = false; try { codeBlock((T)proxy); proxy.Close(); success = true; } finally { if (!success) { proxy.Abort(); } } } } Which then allows: Service<IOrderService>.Use(orderService => { orderService.PlaceOrder(request); } That's not bad, but I don't think it's as expressive and easily understandable as the using block. The workaround I'm currently trying to use I first read about on blog.davidbarret.net. Basically you override the client's Dispose() method wherever you use it. Something like: public partial class SomeWCFServiceClient : IDisposable { void IDisposable.Dispose() { if (this.State == CommunicationState.Faulted) { this.Abort(); } else { this.Close(); } } } This appears to be able to allow the using block again without the danger of masking a faulted state exception. So, are there any other gotchas I have to look out for using these workarounds? Has anybody come up with anything better?

    Read the article

  • Volunteer for a potential employer?

    - by EoRaptor013
    I've been looking for work since March, and haven't had much luck. Recently, however, I interviewed with a small company near my home for a C#, .NET, SQL development position. I hit it off very well with the hiring manager during the phone screen, and even more so during the face to face. Unfortunately, I failed the practical test: wiring up a web form, creating a couple of SQL stored procedures, saving new data with validation, and creating a minimal search screen. I knew what I was doing, but I was too slow to meet their standards as all the work needed to be done within an hour. Nevertheless, I really liked the place, the environment, the people who I would have been working with, and the boss. (I gave the company an 11 on Joel's 12 point scale.) So, the obvious next step was to scrape the rust off. I've been trying to create little projects for myself, but I don't know that I've been effective in getting any faster. What with all that goes into creating a project, I'm not heads-down coding as much as I think I need. Now, with all that introduction, here's the question. I have been thinking about calling the hiring manager at that place, and asking him to let me volunteer for three or four weeks, with no strings attached. I think it would benefit me, and wouldn't cost him anything (as long as I didn't slow the existing people down!). At the end of that period, he might, or might not, be inclined to hire me, but even if not, I would have had as much as 160 hours of in the trenches development. Maybe not all shiny, but no more rust, I would think. Does this plan make any sense at all? I certainly don't want to sound desperate (although, I'm not far from being there), and I very much need the tuneup, lube, and change the oil. What's the downside, if any, to me doing this? Do any of you see red flags going up—either from the prerspective of the hiring manager, or from the perspective of a developer?

    Read the article

  • Defining where on the page the flowdocument I am printing will 'start' and 'end'

    - by Sagi1981
    Dear community. I am almost done with implementing a printing functionality, but I am having trouble getting the last hurdle done with. My problem is, that I am printing some reports, consisting of a header (with information about the person the report is about), a footer (with a page number) and the content in the middle, which is a FlowDocument. Since the flowdocuments can be fairly long, It is very possible that they will span multiple pages. My approach is to make a custom FlowDocumentPaginator which derives from DocumentPaginator. In there i define my header and my footer. However, when I print my page, the flowdocument and my header and footer are on top of eachother. So my question is plain and simple - how do I define from where and to where the flowdocument part on the pages will be placed? here is the code from my custommade Paginator: public class HeaderedFlowDocumentPaginator : DocumentPaginator { private DocumentPaginator flowDocumentpaginator; public HeaderedFlowDocumentPaginator(FlowDocument document) { flowDocumentpaginator = ((IDocumentPaginatorSource) document).DocumentPaginator; } public override bool IsPageCountValid { get { return flowDocumentpaginator.IsPageCountValid; } } public override int PageCount { get { return flowDocumentpaginator.PageCount; } } public override Size PageSize { get { return flowDocumentpaginator.PageSize; } set { flowDocumentpaginator.PageSize = value; } } public override IDocumentPaginatorSource Source { get { return flowDocumentpaginator.Source; } } public override DocumentPage GetPage(int pageNumber) { DocumentPage page = flowDocumentpaginator.GetPage(pageNumber); ContainerVisual newVisual = new ContainerVisual(); newVisual.Children.Add(page.Visual); DrawingVisual header = new DrawingVisual(); using (DrawingContext dc = header.RenderOpen()) { //Header data } newVisual.Children.Add(header); DrawingVisual footer = new DrawingVisual(); using (DrawingContext dc = footer.RenderOpen()) { Typeface typeface = new Typeface("Trebuchet MS"); FormattedText text = new FormattedText("Page " + (pageNumber + 1).ToString(), CultureInfo.CurrentCulture, FlowDirection.LeftToRight, typeface, 14, Brushes.Black); dc.DrawText(text, new Point(page.Size.Width - 100, page.Size.Height-30)); } newVisual.Children.Add(footer); DocumentPage newPage = new DocumentPage(newVisual); return newPage; } } And here is the printdialogue call: private void btnPrint_Click(object sender, RoutedEventArgs e) { try { PrintDialog printDialog = new PrintDialog(); if (printDialog.ShowDialog() == true) { FlowDocument fd = new FlowDocument(); MemoryStream stream = new MemoryStream(ASCIIEncoding.Default.GetBytes(<My string of text - RTF formatted>)); TextRange tr = new TextRange(fd.ContentStart, fd.ContentEnd); tr.Load(stream, DataFormats.Rtf); stream.Close(); fd.ColumnWidth = printDialog.PrintableAreaWidth; HeaderedFlowDocumentPaginator paginator = new HeaderedFlowDocumentPaginator(fd); printDialog.PrintDocument(paginator, "myReport"); } } catch (Exception ex) { //Handle } }

    Read the article

  • Am I Leaking ADO.NET Connections?

    - by HardCode
    Here is an example of my code in a DAL. All calls to the database's Stored Procedures are structured this way, and there is no in-line SQL. Friend Shared Function Save(ByVal s As MyClass) As Boolean Dim cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection Dim cmd As New SqlClient.SqlCommand Try cmd.Connection = cn cmd.CommandType = CommandType.StoredProcedure cmd.CommandText = "proc_save_my_class" cmd.Parameters.AddWithValue("@param1", s.Foo) cmd.Parameters.AddWithValue("@param2", s.Bar) Return True Finally Dal.Utility.CleanupAdoObjects(cmd, cn) End Try End Function Here is the Connection factory (if I am using the correct term): Friend Shared Function MyAppConnection() As SqlClient.SqlConnection Dim cn As New SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings("MyConnectionString").ToString) cn.Open() If cn.State <> ConnectionState.Open Then ' CriticalException is a custom object inheriting from Exception. Throw New CriticalException("Could not connect to the database.") Else Return cn End If End Function Here is the Dal.Utility.CleaupAdoObjects() function: Friend Shared Sub CleanupAdoObjects(ByVal cmd As SqlCommand, ByVal cn As SqlConnection) If cmd IsNot Nothing Then cmd.Dispose() If cn IsNot Nothing AndAlso cn.State <> ConnectionState.Closed Then cn.Close() End Sub I am getting a lot of "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." error messages reported by the users. The application's DAL opens a connection, reads or saves data, and closes it. No connections are ever left open - intentionally! There is nothing obvious on the Windows 2000 Server hosting the SQL Server 2000 that would indicate a problem. Nothing in the Event Logs and nothing in the SQL Server logs. The timeouts happen randomly - I cannot reproduce. It happens early in the day with only 1 to 5 users in the system. It also happens with around 50 users in the system. The most connections to SQL Server via Performance Monitor, for all databases, has been about 74. The timeouts happen in code that both saves to, and reads from, the database in different parts of the application. The stack trace does not point to one or two offending DAL functions. It's happened in many different places. Does my ADO.NET code appear to be able to leak connections? I've goolged around a bit, and I've read that if the connection pool fills up, this can happen. However, I'm not explicitly setting any connection pooling. I've even tried to increase the Connection Timeout in the connection string, but timeouts happen long before the 300 second (5 minute) value: <add name="MyConnectionString" connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;Connection Timeout=300;"/> I'm at a total loss already as to what is causing these Timeout issues. Any ideas are appreciated.

    Read the article

  • What is the best python module skeleton code?

    - by user213060
    == Subjective Question Warning == Looking for well supported opinions or supporting evidence. Let us assume that skeleton code can be good. If you disagree with the very concept of module skeleton code then fine, but please refrain from repeating that opinion here. Many python IDE's will start you with a template like: print 'hello world' That's not enough... So here's my skeleton code to get this question started: My Module Skeleton, Short Version: #!/usr/bin/env python """ Module Docstring """ # ## Code goes here. # def test(): """Testing Docstring""" pass if __name__=='__main__': test() and, My Module Skeleton, Long Version: #!/usr/bin/env python # -*- coding: ascii -*- """ Module Docstring Docstrings: http://www.python.org/dev/peps/pep-0257/ """ __author__ = 'Joe Author ([email protected])' __copyright__ = 'Copyright (c) 2009-2010 Joe Author' __license__ = 'New-style BSD' __vcs_id__ = '$Id$' __version__ = '1.2.3' #Versioning: http://www.python.org/dev/peps/pep-0386/ # ## Code goes here. # def test(): """ Testing Docstring""" pass if __name__=='__main__': test() Notes: """ ===MODULE TYPE=== Since the vast majority of my modules are "library" types, I have constructed this example skeleton as such. For modules that act as the main entry for running the full application, you would make changes such as running a main() function instead of the test() function in __main__. ===VERSIONING=== The following practice, specified in PEP8, no longer makes sense: __version__ = '$Revision: 1.2.3 $' for two reasons: (1) Distributed version control systems make it neccessary to include more than just a revision number. E.g. author name and revision number. (2) It's a revision number not a version number. Instead, the __vcs_id__ variable is being adopted. This expands to, for example: __vcs_id__ = '$Id: example.py,v 1.1.1.1 2001/07/21 22:14:04 goodger Exp $' ===VCS DATE=== Likewise, the date variable has been removed: __date__ = '$Date: 2009/01/02 20:19:18 $' ===CHARACTER ENCODING=== If the coding is explicitly specified, then it should be set to the default setting of ascii. This can be modified if necessary (rarely in practice). Defaulting to utf-8 can cause anomalies with editors that have poor unicode support. """ There are a lot of PEPs that put forward coding style recommendations. Am I missing any important best practices? What is the best python module skeleton code? Update Show me any kind of "best" that you prefer. Tell us what metrics you used to qualify "best".

    Read the article

  • Linux Device Driver: Symbol "memcpy" not found

    - by Hinton
    Hello, I'm trying to write a Linux device driver. I've got it to work really well, until I tried to use "memcpy". I don't even get a compiler error, when I "make" it just warns me: WARNING: "memcpy" [/root/homedir/sv/main.ko] undefined! OK and when I try to load via insmod, I get on the console: insmod: error inserting './main.ko': -1 Unknown symbol in module and on dmesg: main: Unknown symbol memcpy (err 0) I include the following: #include <linux/module.h> #include <linux/moduleparam.h> #include <linux/init.h> #include <linux/kernel.h> /* printk() */ #include <linux/slab.h> /* kmalloc() */ #include <linux/fs.h> /* everything... */ #include <linux/errno.h> /* error codes */ #include <linux/types.h> /* size_t */ #include <linux/fcntl.h> /* O_ACCMODE */ #include <linux/cdev.h> #include <asm/system.h> /* cli(), *_flags */ #include <asm/uaccess.h> /* copy_*_user */ The function using memcpy: static int dc_copy_to_user(char __user *buf, size_t count, loff_t *f_pos, struct sv_data_dev *dev) { char data[MAX_KEYLEN]; size_t i = 0; /* Copy the bulk as long as there are 10 more bytes to copy */ while (i < (count + MAX_KEYLEN)) { memcpy(data, &dev->data[*f_pos + i], MAX_KEYLEN); ec_block(dev->key, data, MAX_KEYLEN); if (copy_to_user(&buf[i], data, MAX_KEYLEN)) { return -EFAULT; } i += MAX_KEYLEN; } return 0; } Could someone help me? I thought the thing was in linux/string.h, but I get the error just the same. I'm using kernel 2.6.37-rc1 (I'm doing in in user-mode-linux, which works only since 2.6.37-rc1). Any help is greatly appreciated. # Context dependent makefile that can be called directly and will invoke itself # through the kernel module building system. KERNELDIR=/usr/src/linux ifneq ($(KERNELRELEASE),) EXTRA_CFLAGS+=-I $(PWD) -ARCH=um obj-m := main.o else KERNELDIR ?= /lib/modules/$(shell uname -r)/build PWD = $(shell pwd) all: $(MAKE) V=1 ARCH=um -C $(KERNELDIR) M=$(PWD) modules clean: rm -rf Module.symvers .*.cmd *.ko .*.o *.o *.mod.c .tmp_versions *.order endif

    Read the article

  • notification scheduling question

    - by sims
    Hi Stackers! I'm building an app that needs to send out notifications to users depending on user-definable "notifications". So the notifications are not per event. They are arbitrary. A cron job should query the database and send out emails when it finds an event with matching criteria. This is a scheduling app. So naturally, one of the criteria is time. I think I've figured it out, but I'm sure there are better ideas out there as it seems to be a fairly common thing to do. I think I'll limit the user's ability to "minutes beforehand" to get notified, as opposed to seconds or hours. So I have an eventA and notificationA. notificationA should be triggered if an event is due within 45 minutes. So eventA starts at 17:30. The user should be notified at 16:45. But the cron job might not run exactly at 00 seconds. So when the difference is not 45 minutes and 0 seconds it is, say, 45 minutes and 5 seconds. Notification time is past. Email doesn't get sent. User misses event. Shugar. We should also take into account that the cron job might take a long time to run. So maybe we should only trigger it every 5 minutes. So we need a bigger interval maybe. So then my guess would be to say that: if ((eventDueTime - now > notificationTimeValue - interval) && (eventDueTime - now < notificationTimeValue + interval)) sendTheFrikinNotificationAlready(); It seems kind of risky if there are thousands of notifications to send out. I guess I could make a thread for each notification and then a thread for each event that matches the criteria. That might help. Does that make sense? Any other ideas? Thanks!
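    A pattern that avoids the "difference is exactly 45 minutes" race altogether is to have the cron job remember when it last ran and send every notification whose send time falls inside the window (lastRun, now]; a late or slow run then just produces a slightly larger window instead of missed emails. A rough sketch with made-up types:

        import java.util.Date;
        import java.util.List;

        public class NotificationSweep {

            /** Hypothetical shape of a pending notification row. */
            public interface PendingNotification {
                long getEventStartMillis();
                int getLeadMinutes();   // the user's "minutes beforehand" setting
                void send();            // e.g. queue the e-mail
                void markSent();        // so a duplicate run never resends it
            }

            public void run(Date lastRun, Date now, List<PendingNotification> pending) {
                for (PendingNotification n : pending) {
                    long sendAt = n.getEventStartMillis() - n.getLeadMinutes() * 60L * 1000L;
                    if (sendAt > lastRun.getTime() && sendAt <= now.getTime()) {
                        n.send();
                        n.markSent();
                    }
                }
                // Persist "now" as the new lastRun before the job exits.
            }
        }

    In practice the selection would be done in the SQL query itself (WHERE send_at > :lastRun AND send_at <= :now AND sent = 0), which also makes thousands of notifications a non-issue without any per-notification threads.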

    Read the article

  • CSS Transform offset varies with text length

    - by Mr. Polywhirl
    I have set up a demo that has 5 floating <div>s with rotated text of varying length. I am wondering if there is a way to have a CSS class that can handle centering of all text regardless of length. At the moment I have to create a class for each length of characters in my stylesheet. This could get too messy. I have also noticed that the offsets get screwd up is I increase or decrease the size of the wrapping <div>. I will be adding these classes to divs through jQuery, but I still have to set up each class for cross-browser compatibility. .transform3 { -webkit-transform-origin: 65% 100%; -moz-transform-origin: 65% 100%; -ms-transform-origin: 65% 100%; -o-transform-origin: 65% 100%; transform-origin: 65% 100%; } .transform4 { -webkit-transform-origin: 70% 110%; -moz-transform-origin: 70% 110%; -ms-transform-origin: 70% 110%; -o-transform-origin: 70% 110%; transform-origin: 70% 110%; } .transform5 { -webkit-transform-origin: 80% 120%; -moz-transform-origin: 80% 120%; -ms-transform-origin: 80% 120%; -o-transform-origin: 80% 120%; transform-origin: 80% 120%; } .transform6 { -webkit-transform-origin: 85% 136%; -moz-transform-origin: 85% 136%; -ms-transform-origin: 85% 136%; -o-transform-origin: 85% 136%; transform-origin: 85% 136%; } .transform7 { -webkit-transform-origin: 90% 150%; -moz-transform-origin: 90% 150%; -ms-transform-origin: 90% 150%; -o-transform-origin: 90% 150%; transform-origin: 90% 150%; } Note: The offset values I set were eye-balled. Update Although I would like this handled with a stylesheet, I believe that I will have to calculate the transformations of the CSS in JavaScript. I have created the following demo to demonstrate dynamic transformations. In this demo, the user can adjust the font-size of the .v_text class and as long as the length of the text does not exceed the .v_text_wrapper height, the text should be vertically aligned in the center, but be aware that I have to adjust the magicOffset value. Well, I just noticed that this does not work in iOS...

    Read the article

  • How to delegate SwingWorker's publish to other methods

    - by Savvas Dalkitsis
    My "problem" can be described by the following. Assume we have an intensive process that we want to have running in the background and have it update a Swing JProgress bar. The solution is easy: import java.util.List; import javax.swing.JOptionPane; import javax.swing.JProgressBar; import javax.swing.SwingWorker; /** * @author Savvas Dalkitsis */ public class Test { public static void main(String[] args) { final JProgressBar progressBar = new JProgressBar(0,99); SwingWorker<Void, Integer> w = new SwingWorker<Void, Integer>(){ @Override protected void process(List<Integer> chunks) { progressBar.setValue(chunks.get(chunks.size()-1)); } @Override protected Void doInBackground() throws Exception { for (int i=0;i<100;i++) { publish(i); Thread.sleep(300); } return null; } }; w.execute(); JOptionPane.showOptionDialog(null, new Object[] { "Process", progressBar }, "Process", JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE, null, null, null); } } Now assume that i have various methods that take a long time. For instance we have a method that downloads a file from a server. Or another that uploads to a server. Or anything really. What is the proper way of delegating the publish method to those methods so that they can update the GUI appropriately? What i have found so far is this (assume that the method "aMethod" resides in some other package for instance): import java.awt.event.ActionEvent; import java.util.List; import javax.swing.AbstractAction; import javax.swing.Action; import javax.swing.JOptionPane; import javax.swing.JProgressBar; import javax.swing.SwingWorker; /** * @author Savvas Dalkitsis */ public class Test { public static void main(String[] args) { final JProgressBar progressBar = new JProgressBar(0,99); SwingWorker<Void, Integer> w = new SwingWorker<Void, Integer>(){ @Override protected void process(List<Integer> chunks) { progressBar.setValue(chunks.get(chunks.size()-1)); } @SuppressWarnings("serial") @Override protected Void doInBackground() throws Exception { aMethod(new AbstractAction() { @Override public void actionPerformed(ActionEvent e) { publish((Integer)getValue("progress")); } }); return null; } }; w.execute(); JOptionPane.showOptionDialog(null, new Object[] { "Process", progressBar }, "Process", JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE, null, null, null); } public static void aMethod (Action action) { for (int i=0;i<100;i++) { action.putValue("progress", i); action.actionPerformed(null); try { Thread.sleep(300); } catch (InterruptedException e) { e.printStackTrace(); } } } } It works but i know it lacks something. Any thoughts?
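    A slightly cleaner variant of the same idea is to give the long-running methods a tiny progress-callback interface instead of piggybacking on an Action's value map; the worker then forwards the callbacks to publish(). A sketch follows; everything except the SwingWorker/JProgressBar API is made up.

        import java.util.List;
        import javax.swing.JProgressBar;
        import javax.swing.SwingWorker;

        public class ProgressDelegation {

            /** Minimal callback a slow method reports through; it knows nothing of Swing. */
            public interface ProgressListener {
                void progress(int percent);
            }

            // Stand-in for a download/upload method living in another package.
            static void aSlowMethod(ProgressListener listener) throws InterruptedException {
                for (int i = 0; i < 100; i++) {
                    listener.progress(i);
                    Thread.sleep(300);
                }
            }

            public static SwingWorker<Void, Integer> workerFor(final JProgressBar bar) {
                return new SwingWorker<Void, Integer>() {
                    @Override
                    protected Void doInBackground() throws Exception {
                        aSlowMethod(new ProgressListener() {
                            public void progress(int percent) {
                                publish(percent); // forwarded to process() on the EDT
                            }
                        });
                        return null;
                    }

                    @Override
                    protected void process(List<Integer> chunks) {
                        bar.setValue(chunks.get(chunks.size() - 1));
                    }
                };
            }
        }

    The Action-based version in the question works the same way; the listener interface just removes the stringly-typed "progress" key and the unused ActionEvent.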

    Read the article

  • PulpCore music playback - loop sound and animate volume

    - by Peter Perhác
    I have been experimenting with PulpCore, trying to create my own tower defence game (not-playable yet), and I am enjoying it very much I ran into a problem that I can't quite figure out. I extended PulpCore with the JOrbis thing to allow OGG files to be played. Works fine. However, pulpCore seems to have a problem with looping the sound WHILE animating the volume level. I tried this with wav file too, to make sure it isn't jOrbis that breaks it. The code is like this: Sound bgMusic = Sound.load("music/music.ogg"); Playback musicPlayback; ... musicVolume = new Fixed(0.75); musicPlayback = bgMusic.loop(musicVolume); //TODO figure out why it's NOT looping when volume is animated // musicVolume.animate(0, musicVolume.get(), FADE_IN_TIME); This code, for as long as the last line is commented out, plays the music.ogg again and again in an endless loop (which I can stop by calling stop on the Playback object returned from loop(). However, I would like the music to fade in smoothly, so following the advice of the PulpCore API docs, I added the last line which will create the fade-in but the music will only play once and then stop. I wonder why is that? Here is a bit of the documentation: Playback pulpcore.sound.Sound.loop(Fixed level) Loops this sound clip with the specified volume level (0.0 to 1.0). The level may have a property animation attached. Parameters: level Returns: a Playback object for this unique sound playback (one Sound can have many simultaneous Playback objects) or null if the sound could not be played. So what could be the problem? I repeat, with the last line, the sound fades in but doesn't loop, without it it loops but starts with the specified 0.75 volume level. Why can't I animate the volume of the looped music playback? What am I doing wrong? Anyone has any experience with pulpCore and has come across this problem? Anyone could please download PulpCore and try to loop music which fades-in (out)? note: I need to keep a reference to the Playback object returned so I can kill music later.

    Read the article

  • Marshalling polymorphic objects in JAX-WS

    - by pkchukiss
    I'm creating a JAX-WS type webservice, with operations that return an object WebServiceReply. The class WebServiceReply itself contains a field of type Object. The individual operations would populate that field with a few different data-types, depending on the operation. Publishing the WSDL (I'm using Netbeans 6.7), and getting a ASP.NET application to retrieve and parse the WSDL was fine, but when I tried to call an operation, I would receive the following exception: javax.xml.ws.WebServiceException: javax.xml.bind.MarshalException - with linked exception: [javax.xml.bind.JAXBException: class [LDataObject.Patient; nor any of its super class is known to this context.] How do I mark the annotations in the DataObject.Patient class, as well as the WebServiceReply class to get it to work? I haven't been able to fine a definitive resource on marshalling based upon annotations within the target classes either, so it would be great if anybody could point me to that too. WebServiceReply.java @XmlRootElement(name="WebServiceReply") public class WebServiceReply { private Object returnedObject; private String returnedType; private String message; private String errorMessage; .......... // Getters and setters follow } DataObject.Patient.java @XmlRootElement(name="Patient") public class Patient { private int uid; private Date versionDateTime; private String name; private String identityNumber; private List<Address> addressList; private List<ContactNumber> contactNumberList; private List<Appointment> appointmentList; private List<Case> caseList; } Solution (Thanks to Gregory Mostizky for his answer) I edited the WebServiceReply class so that all the possible return objects extend from a new class ReturnValueBase, and added the annotations using @XmlSeeAlso to ReturnValueBase. JAXB worked properly after that! Nonetheless, I'm still learning about JAXB marshalling in JAX-WS, so it would be great if anyone can still post any tutorial on this. Gregory: you might want to add-on to your answer that the return objects need to sub-class from ReturnValueBase. Thanks a lot for your help! I had been going bonkers over this problem for so long!
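    For reference, the shape of that fix, sketched below; only WebServiceReply and Patient come from the post, the rest is illustrative. A common base class carries @XmlSeeAlso listing the concrete return types, and the reply's payload field is typed against that base instead of Object, so JAXB knows about Patient at context-building time.

        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.bind.annotation.XmlSeeAlso;

        @XmlSeeAlso({Patient.class /*, other concrete return types */})
        public abstract class ReturnValueBase {
            // common fields/behaviour, if any
        }

        @XmlRootElement(name = "Patient")
        class Patient extends ReturnValueBase {
            // fields and getters/setters as in the post
        }

        @XmlRootElement(name = "WebServiceReply")
        class WebServiceReply {
            private ReturnValueBase returnedObject; // was Object
            private String returnedType;
            private String message;
            private String errorMessage;
            // getters/setters omitted
        }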

    Read the article
