Search Results

Search found 22641 results on 906 pages for 'use case'.

  • Java Random Slowdowns on Mac OS cont'd

    - by javajustice
    I asked this question a few weeks ago, but I'm still having the problem and I have some new hints. The original question is here: http://stackoverflow.com/questions/1651887/java-random-slowdowns-on-mac-os

    Basically, I have a Java application that splits a job into independent pieces and runs them in separate threads. The threads have no synchronization or shared memory items. The only resources they do share are data files on the hard disk, with each thread having an open file channel.

    Most of the time it runs very fast, but occasionally it will run very slowly for no apparent reason. If I attach a CPU profiler to it, it starts running quickly again. If I take a CPU snapshot, it says it's spending most of its time in "self time" in a function that doesn't do anything except check a few (unshared, unsynchronized) booleans. I don't know how this could be accurate, because (1) it makes no sense, and (2) attaching the profiler seems to knock the threads out of whatever mode they're in and fix the problem.

    Also, regardless of whether it runs fast or slow, it always finishes and gives the same output, and total CPU usage never dips (in this case ~1500%), implying that the threads aren't getting blocked. I have tried different garbage collectors, different sizes for the parts of the memory space, writing data output to non-RAID drives, and putting all data output in threads separate from the main worker threads.

    Does anyone have any idea what kind of problem this could be? Could it be the operating system (OS X 10.6.2)? I have not been able to duplicate it on a Windows machine, but I don't have one with a similar hardware configuration.

  • Testing Broadcasting and receiving messages

    - by Avik
    Guys, I'm having some difficulty figuring this out: I am trying to test whether my code (in C#) to broadcast a message and receive it works. The code to send the datagram (in this case the hostname) is:

        public partial class Form1 : Form
        {
            String hostName;
            byte[] hostBuffer = new byte[1024];

            public Form1()
            {
                InitializeComponent();
                StartNotification();
            }

            public void StartNotification()
            {
                IPEndPoint notifyIP = new IPEndPoint(IPAddress.Broadcast, 6000);
                hostName = Dns.GetHostName();
                hostBuffer = Encoding.ASCII.GetBytes(hostName);
                UdpClient newUdpClient = new UdpClient();
                newUdpClient.Send(hostBuffer, hostBuffer.Length, notifyIP);
            }
        }

    And the code to receive the datagram is:

        public partial class Form1 : Form
        {
            byte[] receivedNotification = new byte[1024];
            String notificationReceived;
            StringBuilder listBox;
            UdpClient udpServer;
            IPEndPoint remoteEndPoint;

            public Form1()
            {
                InitializeComponent();
                udpServer = new UdpClient(new IPEndPoint(IPAddress.Any, 1234));
                remoteEndPoint = null;
                startUdpListener1();
            }

            public void startUdpListener1()
            {
                receivedNotification = udpServer.Receive(ref remoteEndPoint);
                notificationReceived = Encoding.ASCII.GetString(receivedNotification);
                listBox = new StringBuilder(this.listBox1.Text);
                listBox.AppendLine(notificationReceived);
                this.listBox1.Items.Add(listBox.ToString());
            }
        }

    On the receiving side I have a form with only a listbox (listBox1). The problem is that when I execute the receiving code, the program runs but the form isn't visible. However, when I comment out the call to startUdpListener1(), the purpose isn't served but the form is visible. What's going wrong?
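
    A plausible culprit is the blocking udpServer.Receive call inside the constructor: the form can't be shown until the constructor returns, and Receive doesn't return until a datagram arrives. Below is a minimal sketch (not the original code) of one common fix: run the receive loop on a background thread and marshal results back to the UI thread. The port number and the listBox1 control are assumptions carried over from the question.

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading;
        using System.Windows.Forms;

        public partial class Form1 : Form
        {
            UdpClient udpServer;

            public Form1()
            {
                InitializeComponent();
                udpServer = new UdpClient(new IPEndPoint(IPAddress.Any, 1234));

                // Let the constructor return so the form can be shown;
                // block waiting for datagrams on a background thread instead.
                new Thread(ListenLoop) { IsBackground = true }.Start();
            }

            void ListenLoop()
            {
                IPEndPoint remote = null;
                while (true)
                {
                    byte[] data = udpServer.Receive(ref remote); // blocks, but not on the UI thread
                    string text = Encoding.ASCII.GetString(data);

                    // Controls may only be touched from the UI thread.
                    BeginInvoke(new Action(() => listBox1.Items.Add(text)));
                }
            }
        }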

  • Message queue proxy in Python + Twisted

    - by gasper_k
    Hi, I want to implement a lightweight Message Queue proxy. Its job is to receive messages from a web application (PHP) and send them to the Message Queue server asynchronously. The reason for this proxy is that the MQ isn't always available, and is sometimes lagging or even down, but I want to make sure the messages are delivered and the web application returns immediately.

    So, PHP would send the message to the MQ proxy running on the same host. That proxy would save the messages to SQLite for persistence, in case of crashes. At the same time it would send the messages from SQLite to the MQ in batches when the connection is available, and delete them from SQLite.

    Now, the way I understand it, there are these components in this service:

    1. message listener (listens for messages from PHP and writes them to an Incoming Queue)
    2. DB flusher (reads messages from the Incoming Queue and saves them to a database; needed due to SQLite's single-threadedness)
    3. MQ connection handler (keeps the connection to the MQ server alive by reconnecting)
    4. message sender (collects messages from the SQLite db and sends them to the MQ server, then removes them from the db)

    I was thinking of using Twisted for #1 (TCPServer), but I'm having problems integrating it with the other points, which aren't event-driven. Intuition tells me that each of these points should run in a separate thread, because all are IO-bound and independent of each other, but I could easily put them in a single thread. Even so, I couldn't find any good and clear (to me) examples of how to implement such a worker thread alongside Twisted's main loop.

    The example I started with is chatserver.py, which uses service.Application and internet.TCPServer objects. If I start my own thread prior to creating the TCPServer service, it runs a few times, but then it stops and never runs again. I'm not sure why this is happening, but it's probably because I don't use threads with Twisted correctly. Any suggestions on how to implement a separate worker thread and keep Twisted? Do you have any alternative architectures in mind?

  • How to get application context path in spring-ws?

    - by Dhaliwal
    I am using Spring-WS to create a web service. In my project, I have created a Helper class to read sample response and request XML files, which are located in my /src/main/resources folder. When I unit-test my web service application locally, I use System.getProperty("user.dir") to get the application context folder. The following method in the Helper class retrieves the file I am interested in from the resource folder:

        public static File getFileFromResources(String filename) {
            System.out.println("Getting file from resource folder");
            File request = null;
            String curDir = System.getProperty("user.dir");
            String contextpath = "C:\\src\\main\\resources\\";
            request = new File(curDir + contextpath + filename);
            return request;
        }

    However, after publishing the compiled WAR file to the ../webapps folder of the Apache Tomcat directory, I realise that System.getProperty("user.dir") no longer returns my application context path. Instead, it returns the Apache Tomcat root directory, as shown:

        C:\Program Files\Apache Software Foundation\Tomcat 6.0\src\main\resources\SampleClientFile

    I can't seem to find any information about getting the root folder of my web service. I have seen examples for Spring web applications where the context path can be retrieved using:

        request.getSession().getServletContext().getContextPath()

    But those examples assume a Spring web application where a servlet request is available; in Spring-WS, my entry point is an endpoint. How can I get the context path of my web service application? I am expecting a context path of something like:

        C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\clientWebService\WEB-INF\classes

    Could someone suggest a way to achieve this?

  • With NHibernate and Transaction, do I roll back on commit failure, or does it auto-rollback on a single commit?

    - by mattcodes
    I've built the following Dispose method for my Unit of Work, which essentially wraps the active NH session and transaction (the transaction is set as a variable after opening the session, so that it isn't replaced if the NH session gets a new transaction after an error):

        public void Dispose()
        {
            Func<ITransaction, bool> transactionStateOkayFunc =
                trans => trans != null && trans.IsActive && !trans.WasRolledBack;

            try
            {
                if (transactionStateOkayFunc(this.transaction))
                {
                    if (HasErrored)
                    {
                        transaction.Rollback();
                    }
                    else
                    {
                        try
                        {
                            transaction.Commit();
                        }
                        catch (Exception)
                        {
                            if (transactionStateOkayFunc(transaction))
                                transaction.Rollback();
                            throw;
                        }
                    }
                }
            }
            finally
            {
                if (transaction != null)
                    transaction.Dispose();
                if (session.IsOpen)
                    session.Close();
            }
        }

    I can't help feeling that code is a little bloated. Will a transaction automatically roll back if a discrete Commit fails, in the case of non-nested transactions? Will Commit or Rollback automatically Dispose the transaction? If not, will Session.Close() automatically dispose the associated transaction?
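
    For contrast, a minimal sketch of the same unit of work leaning on NHibernate's own cleanup. It assumes the commonly relied-upon behavior that disposing an ITransaction that was neither committed nor rolled back performs a rollback; if that holds, the explicit rollback-on-commit-failure branch becomes redundant:

        using System;
        using NHibernate;

        public sealed class UnitOfWork : IDisposable
        {
            readonly ISession session;
            readonly ITransaction transaction;

            public bool HasErrored { get; set; }

            public UnitOfWork(ISessionFactory factory)
            {
                session = factory.OpenSession();
                transaction = session.BeginTransaction();
            }

            public void Dispose()
            {
                try
                {
                    if (!HasErrored)
                        transaction.Commit(); // if this throws, fall through to Dispose below
                }
                finally
                {
                    transaction.Dispose();    // assumed: rolls back if still active and uncommitted
                    session.Dispose();        // closing the session is implied by disposing it
                }
            }
        }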

  • Testing for interface implementation in WCF/SOA

    - by rabidpebble
    I have a reporting service that implements a number of reports. Each report requires certain parameters. Groups of logically related parameters are placed in an interface, which the report then implements:

        [ServiceContract]
        [ServiceKnownType(typeof(ExampleReport))]
        public interface IService1
        {
            [OperationContract]
            void Process(IReport report);
        }

        public interface IReport
        {
            string PrintedBy { get; set; }
        }

        public interface IApplicableDateRangeParameter
        {
            DateTime StartDate { get; set; }
            DateTime EndDate { get; set; }
        }

        [DataContract]
        public abstract class Report : IReport
        {
            [DataMember]
            public string PrintedBy { get; set; }
        }

        [DataContract]
        public class ExampleReport : Report, IApplicableDateRangeParameter
        {
            [DataMember]
            public DateTime StartDate { get; set; }

            [DataMember]
            public DateTime EndDate { get; set; }
        }

    The problem is that the WCF DataContractSerializer does not expose these interfaces in my client library, so I can't write the generic report-generating front end that I had planned. Can WCF expose these interfaces, or is this a limitation of the serializer? In the latter case, what is the canonical approach to this OO pattern?

    I've looked into NetDataContractSerializer, but it doesn't seem to be an officially supported implementation (which means it's not an option in my project). Currently I've resigned myself to including the interfaces in a library that is shared between the service and the client application, but this seems like an unnecessary extra dependency to me. Surely there is a more straightforward way to do this? I was under the impression that WCF was supposed to replace .NET Remoting, and checking whether an object implements an interface seems to be one of the most basic features required of a remoting interface.
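
    For what it's worth, a sketch of the shared-contracts route the question already mentions: with the interfaces in a common assembly (and type reuse from referenced assemblies enabled when generating the client proxy), the client can test for capabilities the ordinary way. The method below is hypothetical, just to show the intent:

        using System;

        static class ReportClientSketch
        {
            // Client side; IReport and IApplicableDateRangeParameter come from
            // the shared contracts assembly, not from generated proxy code.
            static void DescribeParameters(IReport report)
            {
                if (report is IApplicableDateRangeParameter ranged)
                {
                    Console.WriteLine("Report by {0}, range {1:d} - {2:d}",
                        report.PrintedBy, ranged.StartDate, ranged.EndDate);
                }
            }
        }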

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. It's not escaped me that using IDisposable objects this way is possibly just dirty pool (please advise?) for reasons I've not considered, but, that issue aside, it seems fairly straightforward (and good encapsulation). Code:

        abstract class Node
        {
            protected Node(Stream raw)
            {
                // calculate/generate some base class properties
            }
        }

        class FilesystemNode : Node
        {
            public FilesystemNode(FileStream fs)
                : base(fs)
            {
                // all good here; disposing of fs not our responsibility
            }
        }

        class CompositeNode : Node
        {
            public CompositeNode(IEnumerable some_stuff)
                : base(GenerateRaw(some_stuff))
            {
                // rogue stream from GenerateRaw now loose in the wild!
            }

            static Stream GenerateRaw(IEnumerable some_stuff)
            {
                var content = new MemoryStream();
                // molest elements of some_stuff into proper format, write to stream
                content.Seek(0, SeekOrigin.Begin);
                return content;
            }
        }

    I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). It's not in scope, so I can't explicitly Dispose() it later in the constructor, and adding a using statement in GenerateRaw() is self-defeating since I need the stream returned. Is there a better way to do this?

    Preemptive strikes:

    - Yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses.
    - I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller).

    The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, with the body of GenerateRaw() moved into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor, and not being able to guarantee that it runs for every subtype ever (a Node is not a Node, semantically, without these properties initialized), gave me heebie-jeebies far worse than the (potential) resource leak here does.
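
    One shape an answer can take (a sketch, not the only option): make ownership explicit in the base constructor, so the synthesized stream is disposed exactly where it is consumed. The ownsStream flag and the trimmed-down bodies are inventions for illustration:

        using System.Collections;
        using System.IO;

        abstract class Node
        {
            protected Node(Stream raw) : this(raw, ownsStream: false) { }

            protected Node(Stream raw, bool ownsStream)
            {
                try
                {
                    // calculate/generate base class properties from raw
                }
                finally
                {
                    if (ownsStream)
                        raw.Dispose(); // the synthesized stream dies here, guaranteed
                }
            }
        }

        class CompositeNode : Node
        {
            public CompositeNode(IEnumerable some_stuff)
                : base(GenerateRaw(some_stuff), ownsStream: true) { }

            static Stream GenerateRaw(IEnumerable some_stuff)
            {
                var content = new MemoryStream();
                // write the formatted elements of some_stuff to the stream
                content.Seek(0, SeekOrigin.Begin);
                return content;
            }
        }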

  • JPA join column allows every value...

    - by Fabio Beoni
    I'm testing JPA, in a simple File/FileVersions case (master/details tables with a OneToMany relation), and I have this problem: in the FileVersions table, the field "file_id" (responsible for the relation with the File table) accepts any value, not only values from the File table. How can I use the JPA mapping to limit the input in FileVersion.file_id to values existing in File.id? My classes are File and FileVersion:

    FILE CLASS

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "FILE_ID")
        private Long id;

        @Column(name = "NAME", nullable = false, length = 30)
        private String name;

        //RELATIONS -------------------------------------------
        @OneToMany(mappedBy = "file", fetch = FetchType.EAGER)
        private Collection<FileVersion> fileVersionsList;
        //-----------------------------------------------------

    FILEVERSION CLASS

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "VERSION_ID")
        private Long id;

        @Column(name = "FILENAME", nullable = false, length = 255)
        private String fileName;

        @Column(name = "NOTES", nullable = false, length = 200)
        private String notes;

        //RELATIONS -------------------------------------------
        @ManyToOne(fetch = FetchType.EAGER)
        @JoinColumn(name = "FILE_ID", referencedColumnName = "FILE_ID", nullable = false)
        private File file;
        //-----------------------------------------------------

    and this is the FILEVERSION table:

        CREATE TABLE `JPA-Support`.`FILEVERSION` (
            `VERSION_ID` bigint(20) NOT NULL AUTO_INCREMENT,
            `FILENAME` varchar(255) NOT NULL,
            `NOTES` varchar(200) NOT NULL,
            `FILE_ID` bigint(20) NOT NULL,
            PRIMARY KEY (`VERSION_ID`),
            KEY `FK_FILEVERSION_FILE_ID` (`FILE_ID`)
        ) ENGINE=MyISAM AUTO_INCREMENT=4 DEFAULT CHARSET=latin1

  • Searching a column containing CSV data in a MySQL table for existence of input values

    - by Adarsh R
    Hi, I have a table, say ITEM, in MySQL that stores data as follows:

        ID   FEATURES
        --------------------
        1    AB,CD,EF,XY
        2    PQ,AC,A3,B3
        3    AB,CDE
        4    AB1,BC3
        --------------------

    As input, I will get a CSV string, something like "AB,PQ". I want to get the records that contain AB or PQ. I realized that we have to write a MySQL function to achieve this. So, if we had a magical function MATCH_ANY defined in MySQL that does this, I would then simply execute:

        select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0

    The above query would return records 1, 2 and 3. But I'm running into all sorts of problems while implementing this function, as I realized that MySQL doesn't support arrays and there's no simple way to split strings based on a delimiter. Remodeling the table is the last option for me as it involves a lot of issues. I might also want to execute queries containing multiple MATCH_ANY functions, such as:

        select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 and MATCH_ANY(FEATURES, "CDE")

    In the above case, we would get the intersection of records (1, 2, 3) and (3), which would be just 3. Any help is deeply appreciated. Thanks

  • Why does this code generate different numbers?

    - by frbry
    Hello, I have this function that creates a unique number for a hard-disk and CPU combination:

        DWORD hw_hash()
        {
            char drv[4];
            char szNameBuffer[256];
            DWORD dwHddUnique;
            DWORD dwProcessorUnique;
            DWORD dwUniqueKey;

            char *sysDrive = getenv("SystemDrive");
            strcpy(drv, sysDrive);
            drv[2] = '\\';
            drv[3] = 0;

            GetVolumeInformation(drv, szNameBuffer, 256, &dwHddUnique,
                                 NULL, NULL, NULL, NULL);

            SYSTEM_INFO si;
            GetSystemInfo(&si);
            dwProcessorUnique = si.dwProcessorType + si.wProcessorArchitecture
                              + si.wProcessorRevision;

            dwUniqueKey = dwProcessorUnique + dwHddUnique;
            return dwUniqueKey;
        }

    It returns different numbers if I format my hard disk and install a new Windows. Any ideas why? Thank you.

    Edit: OK, got it: this function returns the volume serial number that the operating system assigns when a hard disk is formatted. To programmatically obtain the hard disk's serial number that the manufacturer assigns, use the Windows Management Instrumentation (WMI) Win32_PhysicalMedia property SerialNumber. I should do more research before posting my problems online. Sorry to bother you; let's keep this here in case anybody else needs it.

  • Why is the background thread not waiting for the task to complete?

    - by Haris Hasan
    I am playing with the async/await feature of C#. Things work as expected when I use it with the UI thread, but when I use it on a non-UI thread it doesn't work as expected. Consider the code below:

        private void Click_Button(object sender, RoutedEventArgs e)
        {
            var bg = new BackgroundWorker();
            bg.DoWork += BgDoWork;
            bg.RunWorkerCompleted += BgOnRunWorkerCompleted;
            bg.RunWorkerAsync();
        }

        private void BgOnRunWorkerCompleted(object sender, RunWorkerCompletedEventArgs runWorkerCompletedEventArgs)
        {
        }

        private async void BgDoWork(object sender, DoWorkEventArgs doWorkEventArgs)
        {
            await Method();
        }

        private static async Task Method()
        {
            for (int i = int.MinValue; i < int.MaxValue; i++)
            {
                var http = new HttpClient();
                var tsk = await http.GetAsync("http://www.ebay.com");
            }
        }

    When I execute this code, the background thread doesn't wait for the long-running task in Method to complete. Instead, it executes BgOnRunWorkerCompleted immediately after calling Method. Why is that so? What am I missing here?

    P.S.: I am not interested in alternate ways or "correct" ways of doing this. I want to know what is actually happening behind the scenes in this case. Why is it not waiting?
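
    The usual explanation is the async void signature: an async method returns to its caller at its first await, and because BgDoWork returns void, BackgroundWorker has no Task to observe, so it treats DoWork as finished the moment the handler yields and raises RunWorkerCompleted. A self-contained console sketch of the difference (Task.Delay stands in for the HTTP calls):

        using System;
        using System.Threading.Tasks;

        class AsyncVoidDemo
        {
            // async void: the caller gets nothing to wait on.
            static async void FireAndForget()
            {
                await Task.Delay(1000);
                Console.WriteLine("fire-and-forget finished");
            }

            // async Task: the caller can await completion.
            static async Task Awaitable()
            {
                await Task.Delay(1000);
                Console.WriteLine("awaitable finished");
            }

            static async Task Main()
            {
                FireAndForget();   // control comes back at the first await
                Console.WriteLine("already past FireAndForget");

                await Awaitable(); // genuinely waits here
                Console.WriteLine("done");
            }
        }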

  • Using PLINQ to calculate and update values within the closure does not work

    - by Keith
    I recently needed to do a running total on a report, where for each group I order the rows and then calculate the running total based on the previous rows within the group. Aha, I thought, a perfect use case for PLINQ! However, when I wrote up the code, I got some strange behavior. The values I was modifying showed as modified when stepping through the debugger, but when they were accessed they were always zero. Sample code:

        class Item
        {
            public int PortfolioID;
            public int TAAccountID;
            public DateTime TradeDate;
            public decimal Shares;
            public decimal RunningTotal;
        }

        List<Item> itemList = new List<Item>
        {
            new Item { PortfolioID = 1, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 1), Shares = 5.335m },
            new Item { PortfolioID = 1, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 2), Shares = -2.335m },
            new Item { PortfolioID = 2, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 1), Shares = 7.335m },
            new Item { PortfolioID = 2, TAAccountID = 1, TradeDate = new DateTime(2010, 5, 2), Shares = -3.335m },
        };

        var found = (from i in itemList
                     where i.TAAccountID == 1
                     select new Item
                     {
                         TAAccountID = i.TAAccountID,
                         PortfolioID = i.PortfolioID,
                         Shares = i.Shares,
                         TradeDate = i.TradeDate,
                         RunningTotal = 0
                     });

        found.AsParallel().ForAll(x =>
        {
            var prevItems = found.Where(i => i.PortfolioID == x.PortfolioID
                                          && i.TAAccountID == x.TAAccountID
                                          && i.TradeDate <= x.TradeDate);
            x.RunningTotal = prevItems.Sum(s => s.Shares);
        });

        foreach (Item i in found)
        {
            Console.WriteLine("Running total: {0}", i.RunningTotal);
        }

        Console.ReadLine();

    If I change the select for found to end with .ToArray(), then it works fine and I get calculated results. Any ideas what I am doing wrong?
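
    The reported behavior is consistent with found being a deferred query: every enumeration re-runs the select and projects brand-new Item objects, so the RunningTotal values written inside ForAll land on throwaway copies, and the final foreach enumerates yet another freshly-zeroed set. ToArray()/ToList() materialize the query once, which is why that variant works. A self-contained sketch of the effect:

        using System;
        using System.Linq;

        class DeferredDemo
        {
            class Box { public int Value; }

            static void Main()
            {
                int[] source = { 1, 2, 3 };

                // Deferred: each enumeration projects fresh Box objects.
                var deferred = source.Select(n => new Box { Value = n });
                foreach (var b in deferred) b.Value = 0;       // mutates copies
                Console.WriteLine(deferred.Sum(b => b.Value)); // prints 6, not 0

                // Materialized: one set of objects shared by all consumers.
                var list = source.Select(n => new Box { Value = n }).ToList();
                foreach (var b in list) b.Value = 0;
                Console.WriteLine(list.Sum(b => b.Value));     // prints 0
            }
        }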

  • Ditching Django's models for Ajax/Web Services

    - by Igor Ganapolsky
    Recently I came across a problem at work that made me rethink Django's models. The app I am developing resides on a Linux server. It's a simple model/view/controller app that involves user interaction and updating data in the database. The problem is that this data resides in an MS SQL database on a Windows machine. So in order to use Django's models, I would have to leverage an ODBC driver on Linux, and then use a Python add-on like pyodbc.

    Well, let me tell you, setting up a reliable and functional ODBC connection on Linux is no easy feat! So much so that I spent several hours maneuvering this on my CentOS box with no luck, and was left with frustration and lots of dumb system errors. In the meantime I have a deadline to meet, and suddenly the very agile and rapid Django application is a roadblock rather than a pleasure to work with.

    Someone on my team suggested writing this app in .NET. But there are a few problems with that: it won't be deployable on a Linux machine, and I won't be able to work on it since I don't know ASP.NET. Then a much better suggestion was made: keep the app in Django, but instead of using models, do straight-up Ajax/web service calls in the template. And then it dawned on me: what a great idea. Django's models seem like a nuisance and hindrance in this case, and I can just have someone else write .NET services on their side that I can call from my template. As a result my app will be leaner and more compact. So, I was wondering if you ever came across a similar dilemma and what you decided to do about it.

  • How can I superimpose modified loess lines on a ggplot2 qplot?

    - by briandk
    Background

    Right now, I'm creating a multiple-predictor linear model and generating diagnostic plots to assess regression assumptions. (It's for a multiple regression analysis stats class that I'm loving at the moment :-)

    My textbook (Cohen, Cohen, West, and Aiken 2003) recommends plotting each predictor against the residuals to make sure that:

    1. The residuals don't systematically covary with the predictor
    2. The residuals are homoscedastic with respect to each predictor in the model

    On point (2), my textbook has this to say:

        Some statistical packages allow the analyst to plot lowess fit lines at the mean of the residuals (0-line), 1 standard deviation above the mean, and 1 standard deviation below the mean of the residuals....In the present case {their example}, the two lines {mean + 1sd and mean - 1sd} remain roughly parallel to the lowess {0} line, consistent with the interpretation that the variance of the residuals does not change as a function of X. (p. 131)

    How can I modify loess lines?

    I know how to generate a scatterplot with a "0-line":

        # First, I'll make a simple linear model and get its diagnostic stats
        library(ggplot2)
        data(cars)
        mod <- fortify(lm(speed ~ dist, data = cars))
        attach(mod)
        str(mod)

        # Now I want to make sure the residuals are homoscedastic
        qplot(x = dist, y = .resid, data = mod) +
          geom_smooth(se = FALSE)  # "se = FALSE" removes the standard error bands

    But does anyone know how I can use ggplot2 and qplot to generate plots where the 0-line, "mean + 1sd" AND "mean - 1sd" lines would be superimposed? Is that a weird/complex question to be asking?

  • Constructor or Assignment Operator

    - by ju
    Can you help me: is there a definition in the C++ standard that describes which one will be called, constructor or assignment operator, in this case?

        #include <iostream>
        using namespace std;

        class CTest
        {
        public:
            CTest() : m_nTest(0)
            {
                cout << "Default constructor" << endl;
            }

            CTest(int a) : m_nTest(a)
            {
                cout << "Int constructor" << endl;
            }

            CTest(const CTest& obj)
            {
                m_nTest = obj.m_nTest;
                cout << "Copy constructor" << endl;
            }

            CTest& operator=(int rhs)
            {
                m_nTest = rhs;
                cout << "Assignment" << endl;
                return *this;
            }

        protected:
            int m_nTest;
        };

        int _tmain(int argc, _TCHAR* argv[])
        {
            CTest b = 5;
            return 0;
        }

    Or is it just a matter of compiler optimization?

  • Dynamically setting the queryset of a ModelMultipleChoiceField to a custom recordset

    - by Daniel Quinn
    I've seen all the howtos about how to set a ModelMultipleChoiceField to use a custom queryset, and I've tried them and they work. However, they all use the same paradigm: the queryset is just a filtered list of the same objects. In my case, I'm trying to get the admin to draw a multiselect form that, instead of using usernames as the text portion of the <option> elements, uses the name field from my Account class. Here's a breakdown of what I've got:

        # models.py
        class Account(models.Model):
            name = models.CharField(max_length=128, help_text="A display name that people understand")
            user = models.ForeignKey(User, unique=True)  # tied to the User class in settings.py

        class Organisation(models.Model):
            administrators = models.ManyToManyField(User)

        # admin.py
        from django.forms import ModelMultipleChoiceField
        from django.contrib.auth.models import User

        class OrganisationAdminForm(forms.ModelForm):
            def __init__(self, *args, **kwargs):
                from ethico.accounts.models import Account
                self.base_fields["administrators"] = ModelMultipleChoiceField(
                    queryset=User.objects.all(),
                    required=False
                )
                super(OrganisationAdminForm, self).__init__(*args, **kwargs)

            class Meta:
                model = Organisation

    This works. However, I want the queryset above to draw a select box with the Account.name property and the User.id property. This didn't work:

        queryset=Account.objects.all().order_by("name").values_list("user", "name")

    It failed with this error:

        'tuple' object has no attribute 'pk'

    I figured this would be easy, but it's turned into hours of dead ends. Anyone care to shed some light?

  • UpdateModel not working consistently for me if a ddl gets hidden by jQuery

    - by awrigley
    Hi, my jQuery code hides a ddl under certain circumstances. When this is the case, after submitting the form, using UpdateModel doesn't seem to work consistently. My code in the controller:

        // POST: /IllnessDetail/Edit
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(IllnessDetail ill)
        {
            IllnessDetailFormViewModel mfv = new IllnessDetailFormViewModel(ill);

            if (ModelState.IsValid)
            {
                try
                {
                    IllnessDetail sick = idr.GetLatestIllnessDetailByUsername(User.Identity.Name);
                    UpdateModel(sick);
                    idr.Save();
                    return RedirectToAction("Current", "IllnessDetail");
                }
                catch
                {
                    ModelState.AddRuleViolations(mfv.IllnessDetail.GetRuleViolations());
                }
            }

            return View(new IllnessDetailFormViewModel(ill));
        }

    I have only just started with MVC, and am working under a deadline, so I am still hazy as to how UpdateModel works. Debugging seems to reveal that the correct value is passed in to the action method:

        public ActionResult Edit(IllnessDetail ill)

    And the correct value is put into sick in the following line:

        IllnessDetailFormViewModel mfv = new IllnessDetailFormViewModel(ill);

    However, when all is said, done and returned to the client, what displays is the value of:

        sick.IdInfectiousAgent

    instead of the value of:

        ill.IdInfectiousAgent

    The only reason I can think of is that the ddlInfectiousAgent has been hidden by jQuery. Or am I barking up the wrong lamp post? Andrew
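
    A detail worth checking, stated here as an assumption since it depends on what the jQuery actually does: a select hidden with display:none is still submitted with the form, but one that is disabled or removed from the DOM is not, and a value the binder never receives is silently left at whatever the freshly loaded sick object already contains. A small diagnostic sketch to confirm which fields actually arrive in the post:

        // Inside the Edit action, before calling UpdateModel: dump the posted
        // form keys to the debug output to see whether the hidden dropdown's
        // value was submitted at all.
        foreach (string key in Request.Form.AllKeys)
        {
            System.Diagnostics.Debug.WriteLine(key + " = " + Request.Form[key]);
        }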

  • My Android tests don't get internet access!

    - by Malachii
    The subject says it all. My application gets internet access thanks to the android.permission.INTERNET permission, but my test cases don't while using the instrumentation test runner. This means I can't test my server IO routines in my test cases. What's up? Here's my manifest in case it helps you. Thanks!

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="com.example.helloandroid"
                  android:versionCode="1"
                  android:versionName="1.0">
            <uses-permission android:name="android.permission.INTERNET"></uses-permission>
            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <uses-library android:name="android.test.runner" />
                <activity android:name=".HelloAndroid" android:label="@string/app_name">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
            <uses-sdk android:minSdkVersion="2" />
            <instrumentation android:name="android.test.InstrumentationTestRunner"
                             android:targetPackage="qnext.mobile.redirect"
                             android:label="Qnext Redirect Tests" />
        </manifest>

  • How can I ceil the height of a div that has "display: table-cell"?

    - by eric01
    I am trying to vertically center a div inside another div, regardless of the size of both divs. I am doing this:

        <div id='outer_div'>
            <div id='inner_div'></div>
        </div>

    The CSS is:

        #outer_div {
            display: table-row;
            height: 200px;
            width: 200px;
        }

        #inner_div {
            display: table-cell;
            vertical-align: middle;
            height: 500px;
            width: 500px;
        }

    I put larger dimensions for the inner div on purpose. What happens is that even if I give it a width of 1000px, the width of the inner div is "ceiled" at 200px by the outer div's width (and this is what I want). But the height is not ceiled, and I would like it to be ceiled at the height of the outer div. What I want is for the inner div to remain the same size as the outer_div, no matter what height I give the outer_div in CSS. I basically want it to be the same size as its parent.

    EDIT: I only put text inside those divs. So let's say I have an outer div of 200px*200px (it has display: table-row), and the inner div is defined by CSS as 500px*500px and has dummy text inside. My expected result is to have the inner div shrunk down to 200px*200px. It is successfully "shrunk" by the outer div for the width, but NOT for the height. What I want is to have it shrunk on the height as well (so the inner div adjusts automatically in case I change the height of the outer div). How do I go about that?

  • How to define a "complicated" ComputedColumn in SQL Server?

    - by Slauma
    SQL Server beginner question: I'm trying to introduce a computed column in SQL Server (2008). In the table designer of SQL Server Management Studio I can do this, but the designer only offers me one single edit cell to define the expression for this column. Since my computed column will be rather complicated (depending on several database fields and with some case differentiations), I'd like a more comfortable and maintainable way to enter the column definition (including line breaks for formatting and so on).

    I've seen there is an option to define functions in SQL Server (scalar-valued or table-valued functions). Is it perhaps better to define such a function and use it as the column specification? And what kind of function (scalar-valued, table-valued)?

    To make a simplified example, I have two database columns:

        DateTime1 (smalldatetime, NULL)
        DateTime2 (smalldatetime, NULL)

    Now I want to define a computed column "Status" which can have four possible values. In dummy language:

        if (DateTime1 IS NULL and DateTime2 IS NULL)
            set Status = 0
        else if (DateTime1 IS NULL and DateTime2 IS NOT NULL)
            set Status = 1
        else if (DateTime1 IS NOT NULL and DateTime2 IS NULL)
            set Status = 2
        else
            set Status = 3

    Ideally I would like to have a function GetStatus() which can access the different column values of the table row I want to compute the value of "Status" for, and then define the computed column specification as just GetStatus(), without parameters. Is that possible at all? Or what is the best way to work with "complicated" computed column definitions? Thank you for any tips in advance!

  • m2eclipse: How to set Eclipse project settings when importing a maven project?

    - by Marius Andreiana
    Using the m2eclipse Eclipse plugin, everybody on the dev team should be able to check out source code, import the Maven project in Eclipse and be good to go. I saw that m2eclipse is being merged into Eclipse 3.7, and maven-eclipse-plugin won't be maintained any longer, so I'm looking for an m2eclipse-based solution (without running "mvn eclipse:clean eclipse:eclipse" before project import, which is what maven-eclipse-plugin does).

    maven-eclipse-plugin allows this in pom.xml:

        <additionalConfig>
            <file>
                <name>.settings/com.google.gdt.eclipse.core.prefs</name>
                <content><![CDATA[
        eclipse.preferences.version=2
        jarsExcludedFromWebInfLib=
        warSrcDir=${project.build.directory}/${project.build.finalName}
        warSrcDirIsOutput=true
        ]]>
                </content>
            </file>

    The more general question is: how would m2eclipse do something similar? For some cases, just saving the Eclipse .settings/prefs file works (e.g. org.eclipse.jdt.ui.prefs), but in this case com.google.gdt.eclipse.core.prefs is always overwritten on m2eclipse project import.

    A specific question is asked here, with no reply. Thanks!

    UPDATE: Not possible now, see request

  • Is there a standard syntax for encoding structure objects as HTTP GET request parameters?

    - by lexicore
    Imagine we need to pass a number of structured objects to the web application - for instance, a locale, layout settings and the definition of some query. This can easily be done with JSON or XML, similar to the following fragment:

        <Locale>en</Locale>
        <Layout>
            <Block id="header">hide</Block>
            <Block id="footer">hide</Block>
            <Block id="navigation">minimize</Block>
        </Layout>
        <Query>
            <What>water</What>
            <When>
                <Start>2010-01-01</Start>
            </When>
        </Query>

    However, passing such structures over HTTP implies (roughly speaking) HTTP POST. Now assume we're limited to HTTP GET. Is there some kind of standard solution for encoding structured data in HTTP GET request parameters? I can easily imagine something like:

        Locale=en&
        Layout.Block.header=hide&
        Layout.Block.footer=hide&
        Layout.Block.navigation=minimize&
        Query.What=water&
        Query.When.Start=2010-01-01

    But what I'm looking for is a "standard" syntax, if there is any.

    P.S. I'm well aware of the problem with URL length. Please assume that it's not a problem in this case.
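
    For illustration only, here is what the dotted convention from the question might look like mechanically. This is a sketch of one possible ad hoc encoding, not a standard; the Flatten helper and the dictionary shape are inventions:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net; // WebUtility.UrlEncode

        class QueryStringSketch
        {
            // Flatten nested key/value pairs into dotted parameter names.
            static IEnumerable<KeyValuePair<string, string>> Flatten(
                string prefix, IDictionary<string, object> node)
            {
                foreach (var entry in node)
                {
                    string key = prefix == null ? entry.Key : prefix + "." + entry.Key;
                    if (entry.Value is IDictionary<string, object> child)
                        foreach (var kv in Flatten(key, child))
                            yield return kv;
                    else
                        yield return new KeyValuePair<string, string>(
                            key, Convert.ToString(entry.Value));
                }
            }

            static void Main()
            {
                var request = new Dictionary<string, object>
                {
                    ["Locale"] = "en",
                    ["Query"] = new Dictionary<string, object>
                    {
                        ["What"] = "water",
                        ["When"] = new Dictionary<string, object> { ["Start"] = "2010-01-01" }
                    }
                };

                string qs = string.Join("&",
                    Flatten(null, request).Select(kv =>
                        WebUtility.UrlEncode(kv.Key) + "=" + WebUtility.UrlEncode(kv.Value)));

                Console.WriteLine(qs); // Locale=en&Query.What=water&Query.When.Start=2010-01-01
            }
        }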

  • Simple encryption - Sum of Hashes in C

    - by Dogbert
    I am attempting to demonstrate a simple proof of concept with respect to a vulnerability in a piece of code in a game written in C. Let's say that we want to validate a character login. The login is handled by the user choosing n items (let's just assume n = 5 for now) from a graphical menu. The items are all medieval themed:

        _______________________________
        |          |          |        |
        |   Bow    |  Sword   | Staff  |
        |----------|----------|--------|
        |  Shield  |  Potion  |  Gold  |
        |__________|__________|________|

    The user must click on each item, then choose a number for each item. The validation algorithm then does the following:

    1. Determines which items were selected
    2. Drops each string to lowercase (i.e., Bow becomes bow, etc.)
    3. Calculates a simple string hash for each string (e.g., bow: b=2, o=15, w=23, sum = 2+15+23 = 40)
    4. Multiplies the hash by the value the user selected for the corresponding item; this new value is called the key
    5. Sums together the keys for each of the selected items; this is the final validation hash

    IMPORTANT: The validator will accept this hash, along with non-zero multiples of it (i.e., if the final hash equals 1111, then 2222, 3333, 8888, etc. are also valid).

    So, for example, let's say I select:

        Bow (1)
        Sword (2)
        Staff (10)
        Shield (1)
        Potion (6)

    The algorithm drops each of these strings to lowercase, calculates their string hashes, multiplies each hash by the number selected for the corresponding string, then sums these keys together, e.g.:

        Final_Validation_Hash = 1*HASH(Bow) + 2*HASH(Sword) + 10*HASH(Staff)
                              + 1*HASH(Shield) + 6*HASH(Potion)

    By application of Euler's method, I plan to demonstrate that these hashes are not unique, and I want to devise a simple application to prove it. In my case, for 5 items, I would essentially be trying to solve:

        (B)(y) = (A_1)(x_1) + (A_2)(x_2) + (A_3)(x_3) + (A_4)(x_4) + (A_5)(x_5)

    where:

    - B is arbitrary
    - A_j are the selected coefficients/values for each string/category
    - x_j are the hash values for each string/category
    - y is the final validation hash (e.g., 1111 above)
    - B, y, A_j, x_j are all discrete-valued, positive, and non-zero (i.e., natural numbers)

    Can someone either assist me in solving this problem or point me to a similar example (i.e., code, worked-out equations, etc.)? I just need to solve the final step (i.e., (B)(y) = ...). Thank you all in advance.
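
    A sketch of the brute-force version of that final step, to show collisions exist at all. The hash is the one described in the question (sum of 1-based alphabet positions after lowercasing); the search bounds are arbitrary, and a real attack could replace the loops with something number-theoretic such as the extended Euclidean algorithm:

        using System;
        using System.Linq;

        class CollisionSketch
        {
            // The question's hash: sum of 1-based alphabet positions, lowercased.
            static int Hash(string s) =>
                s.ToLowerInvariant().Sum(c => c - 'a' + 1);

            static void Main()
            {
                string[] items = { "bow", "sword", "staff", "shield", "potion" };
                int[] x = items.Select(Hash).ToArray();

                // Target selection from the question: Bow(1) Sword(2) Staff(10) Shield(1) Potion(6).
                int[] chosen = { 1, 2, 10, 1, 6 };
                int target = chosen.Zip(x, (a, h) => a * h).Sum();
                Console.WriteLine("target hash = " + target);

                // Brute-force a different coefficient vector with the same hash.
                for (int a1 = 1; a1 <= 20; a1++)
                for (int a2 = 1; a2 <= 20; a2++)
                for (int a3 = 1; a3 <= 20; a3++)
                for (int a4 = 1; a4 <= 20; a4++)
                for (int a5 = 1; a5 <= 20; a5++)
                {
                    int sum = a1*x[0] + a2*x[1] + a3*x[2] + a4*x[3] + a5*x[4];
                    bool sameAsChosen = a1 == 1 && a2 == 2 && a3 == 10 && a4 == 1 && a5 == 6;
                    if (sum == target && !sameAsChosen)
                    {
                        Console.WriteLine($"collision: {a1},{a2},{a3},{a4},{a5}");
                        return;
                    }
                }
            }
        }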

  • Erlang ODBC parameter query with null parameters

    - by Schlomer
    Is it possible to pass null values to parameter queries? For example:

        Sql = "insert into TableX values (?,?)".
        Params = [{sql_integer, [Val1]},
                  {sql_float, [Val2]}].
        % Val2 may be a float, or it may be the atom 'undefined'
        odbc:param_query(OdbcRef, Sql, Params).

    Now, of course, odbc:param_query/3 is going to complain if Val2 is undefined when it tries to match it to a sql_float. But my question is: is it possible to use a parameterized query such as

        Sql = "insert into TableY values (?,?,?,?,?,?,?,?,?)".

    with any null parameters? I have a use case where I am dumping a large amount of real-time data into a database by either inserting or updating. Some of the tables I am updating have a dozen or so nullable fields, and I do not have a guarantee that all of the data will be there. Concatenating SQL together for each query and checking for null values seems complex, and the wrong way to do it. Having a parameterized query for each permutation is simply not an option. Any thoughts or ideas would be fantastic! Thank you!

  • Are finalizers ever allowed to call other managed classes' methods?

    - by romkyns
    I used to be pretty sure the answer is "no", as explained in "Overriding the Finalize method" and the Object.Finalize documentation. However, while randomly browsing through FileStream in Reflector, I found that it can actually call just such a method from a finalizer:

        private SafeFileHandle _handle;

        ~FileStream()
        {
            if (this._handle != null)
            {
                this.Dispose(false);
            }
        }

        protected override void Dispose(bool disposing)
        {
            try
            {
                ...
            }
            finally
            {
                if ((this._handle != null) && !this._handle.IsClosed) // <=== HERE
                {
                    this._handle.Dispose();                           // <=== AND HERE
                }
                [...]
            }
        }

    I started wondering whether this will always work due to the exact way in which it's written, and hence whether "do not touch managed classes from finalizers" is just a guideline that can be broken given a good reason and the necessary knowledge to do it right. I dug a bit deeper and found that the worst that can happen when the "rule" is broken is that the managed object being accessed has already been finalized, or may be getting finalized in parallel on a separate thread. So if SafeFileHandle's finalizer doesn't do anything that would cause a subsequent call to Dispose to fail, then the above should be fine... right?

    Question: so there might after all be situations in which a method on another managed class may be called reliably from a finalizer? I've always believed this to be false, but this code suggests that it's possible and that there can be good enough reasons to do it.

    Bonus: observe that the SafeFileHandle will not even know it's being called from a finalizer, since this is just a normal call to Dispose(). The base class, SafeHandle, actually has two private methods, InternalDispose and InternalFinalize, and in this case InternalDispose will be called. Isn't this a problem? Why not?...
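
    For contrast, the textbook guidance encoded as code (a sketch): in the standard Dispose(bool) pattern, managed members are touched only when disposing is true, i.e. never from the finalizer thread. The usual reading of the FileStream case is that SafeHandle derives from CriticalFinalizerObject, whose finalizers are documented to run after ordinary finalizers, so the handle is still valid when ~FileStream() runs; an arbitrary managed member carries no such guarantee:

        using System;

        public class Resource : IDisposable
        {
            IDisposable managedMember;  // e.g. another stream (hypothetical)
            IntPtr nativeHandle;        // stand-in for unmanaged state

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            ~Resource()
            {
                Dispose(false); // managed members may already be finalized here
            }

            protected virtual void Dispose(bool disposing)
            {
                if (disposing)
                {
                    // Safe only on the deterministic path.
                    if (managedMember != null)
                        managedMember.Dispose();
                }

                // Unmanaged cleanup is safe on both paths.
                nativeHandle = IntPtr.Zero;
            }
        }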
