Search Results

Search found 13469 results on 539 pages for 'avoid trouble'.


  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4TB of data, with a replication factor set to 3. For an application update we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:
      1. Kill the Cassandra process on one server of the cluster.
      2. Start it again, wait for the commit log to be written to disk, and kill it again.
      3. Make the modifications in the storage.xml file.
      4. Rename or delete the files in the data directories according to the changes we made.
      5. Start Cassandra.
      6. Go back to step 1 with the next server on the list.
    My questions are:
      - Did I understand the process correctly? Is there any risk of data corruption?
      - During the process there will be servers with different versions of the storage.xml file in the same cluster, for the same keyspace. Is that a problem?
      - Same question as above if we not only add, rename and remove ColumnFamilies, but also change the CompareWith parameter or transform an existing column family into a super one. Or do we need to change the name?
    Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.

    Read the article

  • How to restrict access to a class's data based on state?

    - by Marcus Swope
    In an ETL application I am working on, we have three basic processes:
      1. Validate and parse an XML file of customer information from a third party.
      2. Match values received in the file to values in our system.
      3. Load the customer data into our system.
    The issue is that we may need to display the customer information from any or all of the above states to an internal user, and there is data in our customer class that will never be populated before the values have been matched in our system (step 2). For this reason, I would like those values to not even be accessible while the customer is in that state, and I would like to avoid repeating logic like this everywhere:

        if (customer.IsMatched)
            DisplayTextOnWeb(customer.SomeMatchedValue);

    My first thought was to add a couple of interfaces on top of Customer that would only expose the properties and behaviors of the current state, and then only deal with those interfaces. The problem with this approach is that there seems to be no good way to move from an ICustomerWithNoMatchedValues to an ICustomerWithMatchedValues without doing direct casts, etc. (or at least I can't find one). I can't be the first to have come across this; how do you normally approach it? As a last caveat, I would like this solution to play nicely with FluentNHibernate :) Thanks in advance...

    Read the article

  • using the ASP.NET Caching API via method annotations in C#

    - by craigmoliver
    In C#, is it possible to decorate a method with an annotation to populate the cache object with the return value of the method? Currently I'm using the following class to cache data objects:

        public class SiteCache
        {
            // 7 days + 6 hours (offset to avoid repeats peak time)
            private const int KeepForHours = 174;

            public static void Set(string cacheKey, Object o)
            {
                if (o != null)
                    HttpContext.Current.Cache.Insert(cacheKey, o, null, DateTime.Now.AddHours(KeepForHours), TimeSpan.Zero);
            }

            public static object Get(string cacheKey)
            {
                return HttpContext.Current.Cache[cacheKey];
            }

            public static void Clear(string sKey)
            {
                HttpContext.Current.Cache.Remove(sKey);
            }

            public static void Clear()
            {
                foreach (DictionaryEntry item in HttpContext.Current.Cache)
                {
                    Clear(item.Key.ToString());
                }
            }
        }

    In methods I want to cache I do this:

        [DataObjectMethod(DataObjectMethodType.Select)]
        public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name)
        {
            var ck = string.Format("SiteSettings_SelectOne_Name-Name_{0}-", Name.ToLower());
            var dt = (DataTable)SiteCache.Get(ck);
            if (dt == null)
            {
                dt = new DataTable();
                dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
                SiteCache.Set(ck, dt);
            }
            var info = new SiteSettingsInfo();
            foreach (DataRowView dr in dt.DefaultView)
                info = SiteSettingsInfo_Load(dr);
            return info;
        }

    Is it possible to separate those concerns like so (notice the new annotation)?

        [CacheReturnValue]
        [DataObjectMethod(DataObjectMethodType.Select)]
        public static SiteSettingsInfo SiteSettings_SelectOne_Name(string Name)
        {
            var dt = new DataTable();
            dt.Load(ModelProvider.SiteSettings_SelectOne_Name(Name));
            var info = new SiteSettingsInfo();
            foreach (DataRowView dr in dt.DefaultView)
                info = SiteSettingsInfo_Load(dr);
            return info;
        }

    Read the article

  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin. (It is spawned by xinetd.) How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure. My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, something like:

        #!/bin/sh
        ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        ...end of script...

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input and/or help on a classification problem; if anyone has references I can read to help me solve it even better, please share them. I have a classification problem with four discrete and very well separated classes. However, my input is continuous and has a high frequency (50Hz), since it's a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and Class 5 the neutral/resting (do nothing) class. This class is the rejected class. The problem is that when I move from one class to another, I activate a lot of false positives during the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to class 1, I first see a lot of 3's before getting to class 1. Ideally, I would want my decision boundary to look like the one in the picture below, where the rejected class (Class 5) has a higher decision boundary than the other classes, to avoid misclassification during transitions. I am currently implementing my algorithm in Matlab using optimized naive Bayes, kNN, and SVM algorithms. Question: What is the best/common way to handle abstain/rejected classes? Should I use fuzzy logic or a loss function, or should I include the resting cluster in the training?

    Read the article

  • Using variables inside macros in SQL

    - by Tim
    Hello, I'm wanting to use variables inside my macro SQL on Teradata. I thought I could do something like the following:

        REPLACE MACRO DbName.MyMacro
        (
            MacroNm VARCHAR(50)
        )
        AS
        (
            /* Variable to store last time the macro was run */
            DECLARE V_LAST_RUN_DATE TIMESTAMP;

            /* Get last run date and store in V_LAST_RUN_DATE */
            SELECT LastDate
            INTO V_LAST_RUN_DATE
            FROM DbName.RunLog
            WHERE MacroNm = :MacroNm;

            /* Update the last run date to now and save the old date in history */
            EXECUTE MACRO DbName.RunLogUpdater
            (
                :MacroNm
                ,V_LAST_RUN_DATE
                ,CURRENT_TIMESTAMP
            );
        );

    However, that didn't work, so I thought of this instead:

        REPLACE MACRO DbName.MyMacro
        (
            MacroNm VARCHAR(50)
        )
        AS
        (
            /* Variable to store last time the macro was run */
            CREATE VOLATILE TABLE MacroVars AS
            (
                SELECT LastDate AS V_LAST_RUN_DATE
                FROM DbName.RunLog
                WHERE MacroNm = :MacroNm
            )
            WITH DATA ON COMMIT PRESERVE ROWS;

            /* Update the last run date to now and save the old date in history */
            EXECUTE MACRO DbName.RunLogUpdater
            (
                :MacroNm
                ,SELECT V_LAST_RUN_DATE FROM MacroVars
                ,CURRENT_TIMESTAMP
            );
        );

    I can do what I'm looking for with a stored procedure, but I want to avoid that for performance reasons. Do you have any ideas about this? Is there anything else I can try? Cheers, Tim

    Read the article

  • Connecting Error to Remote Oracle XE database using ASP.NET

    - by imsatasia
    Hello, I have installed Oracle XE on my development machine and it is working fine. Then I installed the Oracle XE client on my test machine, which is also working fine, and I can access the development PC's database from a browser. Now I want to create an ASP.NET application which can access that Oracle XE database. I tried that too, but it always shows me an error on my TEST machine when connecting to the database on the development machine using ASP.NET. Here is my code for the ASP.NET application:

        protected void Page_Load(object sender, EventArgs e)
        {
            string connectionString = GetConnectionString();
            OracleConnection connection = new OracleConnection(connectionString);
            connection.Open();
            Label1.Text = "State: " + connection.State;
            Label1.Text = "ConnectionString: " + connection.ConnectionString;
            OracleCommand command = connection.CreateCommand();
            string sql = "SELECT * FROM Users";
            command.CommandText = sql;
            OracleDataReader reader = command.ExecuteReader();
            while (reader.Read())
            {
                string myField = (string)reader["nID"];
                Console.WriteLine(myField);
            }
        }

        static private string GetConnectionString()
        {
            // To avoid storing the connection string in your code,
            // you can retrieve it from a configuration file.
            return "User Id=System;Password=admin;Data Source=(DESCRIPTION=" +
                   "(ADDRESS=(PROTOCOL=TCP)(HOST=myServerAddress)(PORT=1521))" +
                   "(CONNECT_DATA=(SERVICE_NAME=)));";
        }

    Read the article

  • vector::erase with pointer member

    - by matt
    I am manipulating vectors of objects defined as follows:

        class Hyp {
        public:
            int x;
            int y;
            double wFactor;
            double hFactor;
            char shapeNum;
            double* visibleShape;
            int xmin, xmax, ymin, ymax;

            Hyp(int xx, int yy, double ww, double hh, char s)
                : x(xx), y(yy), wFactor(ww), hFactor(hh), shapeNum(s)
            {
                visibleShape = 0;
                shapeNum = -1;
            }

            // Copy constructor necessary for support of vector::push_back() with visibleShape
            Hyp(const Hyp &other)
            {
                x = other.x;
                y = other.y;
                wFactor = other.wFactor;
                hFactor = other.hFactor;
                shapeNum = other.shapeNum;
                xmin = other.xmin;
                xmax = other.xmax;
                ymin = other.ymin;
                ymax = other.ymax;
                int visShapeSize = (xmax - xmin + 1) * (ymax - ymin + 1);
                visibleShape = new double[visShapeSize];
                for (int ind = 0; ind < visShapeSize; ind++) {
                    visibleShape[ind] = other.visibleShape[ind];
                }
            }

            ~Hyp() { delete[] visibleShape; }
        };

    When I create a Hyp object, allocate/write memory to visibleShape and add the object to a vector with vector::push_back, everything works as expected: the data pointed to by visibleShape is copied using the copy constructor. But when I use vector::erase to remove a Hyp from the vector, the other elements are moved correctly EXCEPT the pointer members visibleShape, which are now pointing to wrong addresses! How can I avoid this problem? Am I missing something?
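    For reference: the usual cause of this symptom is the Rule of Three. Hyp defines a copy constructor and a destructor but no copy assignment operator, and when vector::erase shifts the remaining elements it assigns them, so the compiler-generated assignment copies the raw visibleShape pointer. After the shift two elements share one buffer, that buffer is deleted when the vector destroys its last slot, and the surviving element is left dangling. A minimal sketch of the missing piece is below; it is an illustration, not the original code, and only the relevant members are shown.

        class Hyp {
        public:
            double* visibleShape;
            int xmin, xmax, ymin, ymax;
            // ... x, y, wFactor, hFactor, shapeNum, constructors and destructor
            //     as in the question ...

            Hyp& operator=(const Hyp& other)
            {
                if (this != &other) {
                    // copy the plain members (x, y, wFactor, hFactor, shapeNum) here too
                    xmin = other.xmin;  xmax = other.xmax;
                    ymin = other.ymin;  ymax = other.ymax;

                    const int visShapeSize = (xmax - xmin + 1) * (ymax - ymin + 1);
                    double* fresh = new double[visShapeSize];     // allocate before freeing
                    for (int ind = 0; ind < visShapeSize; ind++)  // deep-copy the buffer
                        fresh[ind] = other.visibleShape[ind];
                    delete[] visibleShape;
                    visibleShape = fresh;
                }
                return *this;
            }
        };

    Replacing the raw array with std::vector<double> (or std::unique_ptr<double[]> in C++11) would remove the need for any hand-written special member functions.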

    Read the article

  • Resetting a PChar variable

    - by scott-thornton
    Hi, I don't know much about Delphi Win32 programming, but I hope someone can answer my question. I get duplicate l_sGetUniqueIdBuffer values saved into the database, which I want to avoid. The l_sGetUniqueIdBuffer is actually different between rows (the value of l_sAuthorisationContent is XML, and I can see a different value generated by the call to getUniqueId). This problem is intermittent (duplicates are rare), and there is only milliseconds' difference between the update dates of the rows. Given (unnecessary code cut out):

        l_sGetUniqueIdBuffer: PChar;
        FOutputBufferSize : integer;

        FOutputBufferSize := 1024;

        while( not dmAccomClaim.ADOQuClaimIdentification.Eof ) do
        begin
          // Get a unique id for the request
          l_sGetUniqueIdBuffer := AllocMem (FOutputBufferSize);
          l_returnCode := getUniqueId (m_APISessionId^, l_sGetUniqueIdBuffer, FOutputBufferSize);

          dmAccomClaim.ADOQuAddContent.Active := False;
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pContent').Value := (WideString(l_sAuthorisationContent));
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pClaimId').Value := dmAccomClaim.ADOQuClaimIdentification.FieldByName('SB_CLAIM_ID').AsString;
          dmAccomClaim.ADOQuAddContent.Parameters.ParamByName('pUniqueId').Value := string(l_sGetUniqueIdBuffer);
          dmAccomClaim.ADOQuAddContent.ExecSQL;

          FreeMem( l_sAuthorisationContent, l_iAuthoriseContentSize );
          FreeMem( l_sGetUniqueIdBuffer, FOutputBufferSize );
        end;

    I guess I need to know: is the value in l_sGetUniqueIdBuffer being reset for every row?

    Read the article

  • How to override the default init.tcl

    - by Sean Murphy
    I'm working on a project where I want to make use of Tcl as the command interpreter. I have a working C library object which I can load from within the tcl shell, but my problem is finding a way to automatically do this while starting tclsh. Essentially my ultimate goal is to be able to run a script and have it load my library and run some initial startup Tcl code before dropping me back to the tclsh command prompt in interactive mode, e.g.

        tclsh -f myscript.tcl --then-switch-to-interactive

    or

        EXPORT TCLINIT=myscript.tcl
        tclsh

    The basic goal is to avoid having to distribute tclsh and instead rely on local user installations of Tcl. All I would like to distribute is my library, a startup script and a shell command to launch tclsh with the library preloaded. I've tried using the environment variables TCLINIT and TCL_LIBRARY, but they seem to have no effect. The only workable solutions I've found so far are to add "source myscript.tcl" to either the end of /usr/share/tcltk/tcl8.5.init.tcl or to ~/.tclshrc. However, both of these "solutions" are imperfect, as they require modification of the default user workspace. It strikes me that there must be a way to handle this in Tcl, but my research so far hasn't yielded anything. Does anyone have any suggestions?

    Read the article

  • How to mult-thread this?

    - by WilliamKF
    I wish to have two threads. The first, thread1, occasionally calls the following pseudo function:

        void waitForThread2()
        {
            if (thread2 is not idle) {
                return;
            }
            notifyThread2IamReady();
            while (thread2IsExclusive) {
            }
        }

    The second, thread2, is forever in the following pseudo loop:

        for (;;) {
            Notify thread1 I am idle.
            while (!thread1IsReady()) {
            }
            Notify thread1 I am exclusive.
            Do some work while thread1 is blocked.
            Notify thread1 I am busy.
            Do some work in parallel with thread1.
        }

    What is the best way to write this such that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables but found that the delay between thread2 doing "Notify thread1 I am busy" and the loop in waitForThread2() observing thread2IsExclusive() can be up to almost one second. I then tried using a volatile sig_atomic_t shared variable to control the same, but something is going wrong, so I must not be doing it correctly.
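    For reference, here is a minimal sketch of this handshake using a C++11 mutex and condition variable. It is an illustration rather than the original code: the state names are invented, and the point is that every state change happens under the mutex and is followed by a notify, so neither thread spins and the wake-up latency is normally far below the second-long delays described.

        #include <condition_variable>
        #include <mutex>

        enum class State { Thread2Idle, Thread1Ready, Thread2Exclusive, Thread2Busy };

        std::mutex m;
        std::condition_variable cv;
        State state = State::Thread2Busy;

        // Called occasionally by thread1.
        void waitForThread2()
        {
            std::unique_lock<std::mutex> lock(m);
            if (state != State::Thread2Idle)
                return;                              // thread2 is not idle: skip this round
            state = State::Thread1Ready;             // notifyThread2IamReady()
            cv.notify_all();
            // Block until thread2 has finished its exclusive section.
            cv.wait(lock, [] { return state != State::Thread1Ready &&
                                      state != State::Thread2Exclusive; });
        }

        // Body of thread2.
        void thread2Loop()
        {
            for (;;) {
                {
                    std::unique_lock<std::mutex> lock(m);
                    state = State::Thread2Idle;      // notify thread1 I am idle
                    cv.wait(lock, [] { return state == State::Thread1Ready; });
                    state = State::Thread2Exclusive; // notify thread1 I am exclusive
                }
                // ... do some work while thread1 is blocked ...
                {
                    std::lock_guard<std::mutex> lock(m);
                    state = State::Thread2Busy;      // notify thread1 I am busy
                    cv.notify_all();
                }
                // ... do some work in parallel with thread1 ...
            }
        }

    thread1 and thread2 themselves would be ordinary std::thread objects, one calling waitForThread2() from time to time and the other running thread2Loop().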

    Read the article

  • Codeigniter common templates

    - by Darthg8r
    Let's say that I have a website that has 100 different pages. Each page uses a common header and footer. Inside the header is some dynamic content that comes from a database. I'd like to avoid having code in every single controller and action that passes this common data into the view.

        function index()
        {
            // It sucks to have to include this on every controller action.
            $data['title'] = "This is the index page";
            $data['currentUserName'] = "John Smith";
            $this->load->view("main_view", $data);
        }

        function comments()
        {
            // It sucks to have to include this on every controller action.
            $data['title'] = "Comment list";
            $data['currentUserName'] = "John Smith";
            $this->load->view("comment_view", $data);
        }

    I realize that I could refactor the code so that the common parts are in a single function and that function is called by each action. Doing so would reduce SOME of the pain, but it still doesn't feel right, since I'd still have to make a call to that function every time. What's the correct way to handle this?

    Read the article

  • Throttling CPU/Memory usage of a Thread in Java?

    - by Nalandial
    I'm writing an application that will have multiple threads running, and want to throttle the CPU/memory usage of those threads. There is a similar question for C++, but I want to try and avoid using C++ and JNI if possible. I realize this might not be possible using a higher level language, but I'm curious to see if anyone has any ideas.

    EDIT: Added a bounty; I'd like some really good, well thought out ideas on this.

    EDIT 2: The situation I need this for is executing other people's code on my server. Basically it is completely arbitrary code, with the only guarantee being that there will be a main method on the class file. Currently, multiple completely disparate classes, which are loaded in at runtime, are executing concurrently as separate threads. I inherited this code (the original author is gone). The way it's written, it would be a pain to refactor to create separate processes for each class that gets executed. If that's the only good way to limit memory usage via the VM arguments, then so be it. But I'd like to know if there's a way to do it with threads. Even as a separate process, I'd like to be able to somehow limit its CPU usage, since as I mentioned earlier, several of these will be executing at once. I don't want an infinite loop to hog up all the resources.

    EDIT 3: An easy way to approximate object size is with Java's Instrumentation classes; specifically, the getObjectSize method. Note that there is some special setup needed to use this tool.

    Read the article

  • Algorithm(s) for rearranging simple symbolic algebraic expressions

    - by Gabe Johnson
    Hi, I would like to know if there is a straightforward algorithm for rearranging simple symbolic algebraic expressions. Ideally I would like to be able to rewrite any such expression with one variable alone on the left hand side. For example, given the input:

        m = (x + y) / 2

    ... I would like to be able to ask about x in terms of m and y, or y in terms of x and m, and get these:

        x = 2*m - y
        y = 2*m - x

    Of course we've all done this algorithm on paper for years. But I was wondering if there was a name for it. It seems simple enough, but if somebody has already cataloged the various "gotchas" it would make life easier. For my purposes I won't need it to handle quadratics. (And yes, CAS systems do this, and yes I know I could just use them as a library. I would like to avoid such a dependency in my application. I really would just like to know if there are named algorithms for approaching this problem.)
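    For reference, the textbook approach (often described as isolating the variable by inverting the expression tree) is to walk from the root of the parsed equation toward the target variable, applying the inverse of each operation to the other side. A minimal sketch is below; it is an illustration under the assumptions that the target variable occurs exactly once and only +, -, * and / appear, and the Node/solve names are invented for the example.

        #include <iostream>
        #include <memory>
        #include <string>

        struct Node {
            std::string value;                  // operator ("+", "-", "*", "/") or a symbol
            std::shared_ptr<Node> left, right;  // null for leaves
        };

        static std::shared_ptr<Node> leaf(const std::string& s) {
            return std::make_shared<Node>(Node{s, nullptr, nullptr});
        }

        static std::shared_ptr<Node> op(const std::string& o,
                                        std::shared_ptr<Node> l,
                                        std::shared_ptr<Node> r) {
            return std::make_shared<Node>(Node{o, l, r});
        }

        static bool contains(const std::shared_ptr<Node>& n, const std::string& var) {
            if (!n) return false;
            if (!n->left && !n->right) return n->value == var;
            return contains(n->left, var) || contains(n->right, var);
        }

        static std::string print(const std::shared_ptr<Node>& n) {
            if (!n->left && !n->right) return n->value;
            return "(" + print(n->left) + " " + n->value + " " + print(n->right) + ")";
        }

        // Solve lhs = rhs for 'var', assuming 'var' occurs exactly once in lhs.
        static std::string solve(std::shared_ptr<Node> lhs, std::shared_ptr<Node> rhs,
                                 const std::string& var) {
            while (lhs->left || lhs->right) {                  // until lhs is the bare variable
                bool inLeft = contains(lhs->left, var);
                std::shared_ptr<Node> keep  = inLeft ? lhs->left  : lhs->right;
                std::shared_ptr<Node> other = inLeft ? lhs->right : lhs->left;
                const std::string o = lhs->value;
                if (o == "+")      rhs = op("-", rhs, other);    // a+b=r  ->  a = r-b
                else if (o == "*") rhs = op("/", rhs, other);    // a*b=r  ->  a = r/b
                else if (o == "-") rhs = inLeft ? op("+", rhs, other)   // a-b=r -> a = r+b
                                                : op("-", other, rhs);  // a-b=r -> b = a-r
                else if (o == "/") rhs = inLeft ? op("*", rhs, other)   // a/b=r -> a = r*b
                                                : op("/", other, rhs);  // a/b=r -> b = a/r
                lhs = keep;
            }
            return var + " = " + print(rhs);
        }

        int main() {
            // m = (x + y) / 2, written as lhs "(x + y) / 2" and rhs "m"
            std::shared_ptr<Node> lhs = op("/", op("+", leaf("x"), leaf("y")), leaf("2"));
            std::cout << solve(lhs, leaf("m"), "x") << "\n";  // x = ((m * 2) - y)
            std::cout << solve(lhs, leaf("m"), "y") << "\n";  // y = ((m * 2) - x)
            return 0;
        }

    Running it on the example above prints x = ((m * 2) - y) and y = ((m * 2) - x), which matches the hand-derived results up to simplification.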

    Read the article

  • 3 tier application pattern suggestion

    - by Maxim Gershkovich
    I have attempted to make my first 3 tier application. In the process I have run into one problem I am yet to find an optimal solution for. Basically, all my objects use an IFillable interface which forces the implementation of a sub as follows:

        Public Sub Fill(ByVal Datareader As Data.IDataReader) Implements IFillable.Fill

    This sub then expects that the IDs from the datareader will be identical to the properties of the object, as such:

        Me.m_StockID = Datareader.GetGuid(Datareader.GetOrdinal("StockID"))

    In the end I end up with a datalayer that looks something like this:

        Public Shared Function GetStockByID(ByVal ConnectionString As String, ByVal StockID As Guid) As Stock
            Dim res As New Stock
            Using sqlConn As New SqlConnection(ConnectionString)
                sqlConn.Open()
                res.Fill(StockDataLayer.GetStockByIDQuery(sqlConn, StockID))
            End Using
            Return res
        End Function

    Mostly this pattern seems to make sense. However, my problem is: let's say I want to implement a property for Stock called StockBarcodeList. Under the above pattern, any way I implement this property will need a connection string passed to it, which obviously breaks my attempt at layer separation. Does anyone have any suggestions on how I might be able to solve this problem, or am I going about this the completely wrong way? Does anyone have any suggestions on how I might improve my implementation? Please note, however, that I am deliberately trying to avoid using the DataSet in any form.

    Read the article

  • Really simple JSON serialization in .NET

    - by Evgeny
    I have some simple .NET objects I'd like to serialize to JSON and back again. The set of objects to be serialized is quite small and I control the implementation, so I don't need a generic solution that will work for everything. Since my assembly will be distributed as a library I'd really like to avoid a dependency on some third-party DLL: I just want to give users one assembly that they can reference. I've read the other questions I could find on converting to and from JSON in .NET. The recommended solution of JSON.NET does work, of course, but it requires distributing an extra DLL. I don't need any of the fancy features of JSON.NET. I just need to handle a simple object (or even dictionary) that contains strings, integers, DateTimes and arrays of strings and bytes. On deserializing I'm happy to get back a dictionary - it doesn't need to create the object again. Is there some really simple code out there that I could compile into my assembly to do this simple job? I've also tried System.Web.Script.Serialization.JavaScriptSerializer, but where it falls down is the byte array: I want to base64-encode it and even registering a converter doesn't let me easily accomplish that due to the way that API works (it doesn't pass in the name of the field).

    Read the article

  • Unit testing opaque structure based C API

    - by Nicolas Goy
    I have a library I wrote with an API based on opaque structures. Using opaque structures has a lot of benefits and I am very happy with it. Now that my API is stable in terms of specification, I'd like to write a complete battery of unit tests to ensure a solid base before releasing it. My concern is simple: how do you unit test an API based on opaque structures, where the main goal is to hide the internal logic? For example, let's take a very simple object, an array, with a very simple test:

        WSArray a = WSArrayCreate();
        int foo = 5;
        WSArrayAppendValue(a, &foo);
        int *bar = WSArrayGetValueAtIndex(a, 0);
        if(&foo != bar)
            printf("Erroneous value returned\n");
        else
            printf("Good value returned\n");
        WSRelease(a);

    Of course, this tests some facts, like the array actually behaving as wanted with one value, but when I write unit tests, at least in C, I usually compare the memory footprint of my data structures with a known state. In my example, I don't know if some internal state of the array is broken. How would you handle that? I'd really like to avoid adding code to the implementation files only for unit testing; I really emphasize loose coupling of modules, and injecting unit tests into the implementation would seem rather invasive to me. My first thought was to include the implementation file in my unit test, linking my unit test statically to my library. For example:

        #include <WS/WS.h>
        #include <WS/Collection/Array.c>

        static void TestArray(void)
        {
            WSArray a = WSArrayCreate();
            /* Structure members are available because we included Array.c */
            printf("%d\n", a->count);
        }

    Is that a good idea? Of course, the unit tests won't benefit from encapsulation, but they are here to ensure it's actually working.
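    One commonly suggested alternative, sketched below purely as an illustration (it is not the poster's suite, it assumes the WSArray functions behave as shown in the question, and it is written as C-style C++ only to match the other examples on this page), is to stay black-box and assert behavioural invariants through the public API alone; appending enough elements exercises whatever internal growth logic exists, so broken internal state tends to surface as a wrong pointer coming back.

        #include <cassert>
        #include <cstdio>

        extern "C" {
        #include <WS/WS.h>   /* the opaque WSArray API from the question */
        }

        /* Append many values and keep re-checking that every index still returns
           the pointer that was stored there.  Only the public API is used, so the
           test survives any change to the internal representation. */
        static void TestArrayKeepsEveryValueReachable(void)
        {
            enum { N = 1000 };               /* large enough to force internal growth */
            static int values[N];
            WSArray a = WSArrayCreate();

            for (int i = 0; i < N; ++i) {
                values[i] = i;
                WSArrayAppendValue(a, &values[i]);
                for (int j = 0; j <= i; j += 97) {   /* sparse re-check keeps it fast */
                    assert(WSArrayGetValueAtIndex(a, j) == &values[j]);
                }
            }
            WSRelease(a);
            std::printf("TestArrayKeepsEveryValueReachable passed\n");
        }

        int main(void)
        {
            TestArrayKeepsEveryValueReachable();
            return 0;
        }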

    Read the article

  • php: replacing double <br /> with </p><p>

    - by andufo
    I use nicEdit to write RTF data in my CMS. The problem is that it generates strings like this:

        hello first line<br><br />this is a second line<br />this is a 3rd line

    Since this is for a news site, I much prefer the final HTML to be like this:

        <p>hello first line</p><p>this is a second line<br />this is a 3rd line</p>

    So my current plan is that I need to:
      1. trim the $data for <br /> at the start/end of the string
      2. replace <br /><br /> with </p><p> (one single <br /> is allowed)
      3. finally, add <p> at the start and </p> at the end
    I only have the 3rd step so far:

        function replace_br($data)
        {
            # step 3
            $data = '<p>'.$data.'</p>';
            return $data;
        }

    Can someone give me a hand with steps 1 and 2? Thanks! PS: it would be even better to also cover runs of breaks. Example: "hello<br /><br /><br /><br /><br />too much space"; those 5 breaklines should also be converted to just one "</p><p>".

    Read the article

  • How to Elegantly convert switch+enum with polymorphism

    - by Kyle
    I'm trying to replace simple enums with type classes, that is, one class derived from a base for each type. So, for example, instead of:

        enum E_BASE { EB_ALPHA, EB_BRAVO };

        E_BASE message = someMessage();
        switch (message)
        {
            case EB_ALPHA: applyAlpha();
            case EB_BRAVO: applyBravo();
        }

    I want to do this:

        Base* message = someMessage();
        message->apply(this); // use polymorphism to determine what function to call.

    I have seen many ways to do this, which all seem less elegant even than the basic switch statement: using dynamic_pointer_cast, inheriting from a messageHandler class that needs to be updated every time a new message is added, or using a container of function pointers. All seem to defeat the purpose of making code easier to maintain by replacing switches with polymorphism. This is as close as I can get (I use templates to avoid inheriting from an all-knowing handler interface):

        class Base
        {
        public:
            template<typename T>
            virtual void apply(T* sandbox) = 0;
        };

        class Alpha : public Base
        {
        public:
            template<typename T>
            virtual void apply(T* sandbox)
            {
                sandbox->applyAlpha();
            }
        };

        class Bravo : public Base
        {
        public:
            template<typename T>
            virtual void apply(T* sandbox)
            {
                sandbox->applyBravo();
            }
        };

        class Sandbox
        {
        public:
            void run()
            {
                Base* alpha = new Alpha;
                Base* bravo = new Bravo;
                alpha->apply(this);
                bravo->apply(this);
                delete alpha;
                delete bravo;
            }

            void applyAlpha()
            {
                // cout << "Applying alpha\n";
            }

            void applyBravo()
            {
                // cout << "Applying bravo\n";
            }
        };

    Obviously, this doesn't compile, but I'm hoping it gets my problem across.
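    For reference: the snippet above cannot compile because virtual member functions are not allowed to be templates in C++. A minimal sketch of the usual workaround, plain double dispatch against a known handler type, is below (an illustration, not the poster's code; if several unrelated sandbox types must be supported, a visitor interface or CRTP layer would replace the concrete Sandbox reference).

        #include <iostream>
        #include <memory>
        #include <vector>

        class Sandbox;  // forward declaration is enough for the message interface

        class Base
        {
        public:
            virtual ~Base() {}
            virtual void apply(Sandbox& sandbox) const = 0;  // dispatch on the message type
        };

        class Sandbox
        {
        public:
            void applyAlpha() { std::cout << "Applying alpha\n"; }
            void applyBravo() { std::cout << "Applying bravo\n"; }
        };

        class Alpha : public Base
        {
        public:
            void apply(Sandbox& sandbox) const { sandbox.applyAlpha(); }
        };

        class Bravo : public Base
        {
        public:
            void apply(Sandbox& sandbox) const { sandbox.applyBravo(); }
        };

        int main()
        {
            Sandbox box;
            std::vector<std::shared_ptr<Base> > messages;
            messages.push_back(std::make_shared<Alpha>());
            messages.push_back(std::make_shared<Bravo>());
            for (std::size_t i = 0; i < messages.size(); ++i)
                messages[i]->apply(box);   // no switch, no enum, no casts
            return 0;
        }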

    Read the article

  • Date Filtered Collections without Functions

    - by madcapnmckay
    Hi, I have an entity similar to the one below:

        public class Entity
        {
            public List<DateItem> PastDates { get; set; }
            public List<DateItem> FutureDates { get; set; }
        }

        public class DateItem
        {
            public DateTime Date { get; set; }
            /*
             * Other Properties
             */
        }

    where PastDates and FutureDates are both mapped to the same type/table. I have been trying to find a way to have the Past and Future properties mapped automagically by NHibernate. The closest I came was a where clause on the mapping, as follows:

        HasMany(x => x.PastDates)
            .AsBag()
            .Cascade.AllDeleteOrphan()
            .KeyColumnNames.Add("EventId")
            .Where("Date < currentdate()")
            .Inverse();

    where currentdate is a UDF. I do not want to have these database-specific functions if I can avoid it, mostly because I can't then test my DAL with SQLite, as it doesn't support functions or stored procedures. At the moment I am building the past and future collections using Criteria and adding them to my DTO manually. Does anyone know how this could be achieved without using any UDFs? Many thanks,

    Read the article

  • .Net xsd.exe tool doesn't generate all types

    - by Mrchief
    For some reason, the MS .NET (v3.5) tool xsd.exe doesn't generate types when they are not used inside any element. For example, take this XSD file (I threw in the complex element to avoid the warning "Warning: cannot generate classes because no top-level elements with complex type were found."):

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema targetNamespace="http://tempuri.org/XMLSchema.xsd"
            elementFormDefault="qualified"
            xmlns="http://tempuri.org/XMLSchema.xsd"
            xmlns:mstns="http://tempuri.org/XMLSchema.xsd"
            xmlns:xs="http://www.w3.org/2001/XMLSchema">

          <xs:simpleType name="EnumTest">
            <xs:restriction base="xs:string">
              <xs:enumeration value="item1" />
              <xs:enumeration value="item2" />
              <xs:enumeration value="item3" />
            </xs:restriction>
          </xs:simpleType>

          <xs:complexType name="myComplexType">
            <xs:attribute name="Name" use="required" type="xs:string"/>
          </xs:complexType>

          <xs:element name="myElem" type="myComplexType"></xs:element>

        </xs:schema>

    When I run this through xsd.exe using

        xsd /c xsdfile.xsd

    I don't see EnumTest in the generated .cs file. Note: even though I don't use the enum here, in my actual project I have cases like this where we send the enum's string value as output. How can I force the xsd tool to include these? Or should I switch to some other tool? I work in Visual Studio 2008.

    Read the article

  • Directly call distutils' or setuptools' setup() function with command name/options, without parsing

    - by Ryan B. Lynch
    I'd like to call Python's distutils' or setuptools' setup() function in a slightly unconventional way, but I'm not sure whether distutils is meant for this kind of usage. As an example, let's say I currently have a 'setup.py' file, which looks like this (lifted verbatim from the distutils docs; the setuptools usage is almost identical):

        from distutils.core import setup

        setup(name='Distutils',
              version='1.0',
              description='Python Distribution Utilities',
              author='Greg Ward',
              author_email='[email protected]',
              url='http://www.python.org/sigs/distutils-sig/',
              packages=['distutils', 'distutils.command'],
             )

    Normally, to build just the .spec file for an RPM of this module, I could run

        python setup.py bdist_rpm --spec-only

    which parses the command line and calls the 'bdist_rpm' code to handle the RPM-specific stuff. The .spec file ends up in './dist'. How can I change my setup() invocation so that it runs the 'bdist_rpm' command with the '--spec-only' option, WITHOUT parsing command-line parameters? Can I pass the command name and options as parameters to setup()? Or can I manually construct a command line, and pass that as a parameter, instead? NOTE: I already know that I could call the script in a separate process, with an actual command line, using os.system() or the subprocess module or something similar. I'm trying to avoid using any kind of external command invocation. I'm looking specifically for a solution that runs setup() in the current interpreter. For background, I'm converting some release-management shell scripts into a single Python program. One of the tasks is running 'setup.py' to generate a .spec file for further pre-release testing. Running 'setup.py' as an external command, with its own command-line options, seems like an awkward method, and it complicates the rest of the program. I feel like there may be a more Pythonic way.

    Read the article

  • Java assignment issues - Is this atomic?

    - by Bob
    Hi, I've got some questions about Java's assignment. First, Strings. I've got a class:

        public class Test
        {
            private String s;

            public synchronized void setS(String str)
            {
                s = s + " - " + str;
            }

            public String getS()
            {
                return s;
            }
        }

    I'm using "synchronized" in my setter and avoiding it in my getter, because in my app there are tons of gets and very few sets. Setting must be synchronized to avoid inconsistency. My question is: is getting and setting a variable atomic? I mean, in a multithreaded environment, Thread1 is about to set variable s while Thread2 is about to get s. Is there any way the getter method could get something different from s's old value or s's new value (suppose we've got only two threads)? In my app it is not a problem to get the new value, and it is not a problem to get the old one. But could I get something else? What about HashMap's getting and putting? Consider this:

        public class Test
        {
            private Map<Integer, String> map = Collections.synchronizedMap(new HashMap<Integer, String>());

            public synchronized void setMapElement(Integer key, String value)
            {
                map.put(key, value);
            }

            public String getValue(Integer key)
            {
                return map.get(key);
            }
        }

    Is putting and getting atomic? How does HashMap handle putting an element into it? Does it first remove the old value and then put the new one? Could I get something other than the old value or the new value? Thanks in advance!

    Read the article

  • What are the best workarounds for known problems with Hibernate's schema validation of floating poin

    - by Jason Novak
    I have several Java classes with double fields that I am persisting via Hibernate. For example, I have:

        @Entity
        public class Node ...
            private double value;

    When Hibernate's org.hibernate.dialect.Oracle10gDialect creates the DDL for the Node table, it maps the value field to a "double precision" type:

        create table MDB.Node (... value double precision not null, ...

    It would appear that in Oracle, "double precision" is an alias for "float". So, when I try to verify the database schema using the org.hibernate.cfg.AnnotationConfiguration.validateSchema() method, Oracle appears to describe the value column as a "float". This causes Hibernate to throw the following exception:

        org.hibernate.HibernateException: Wrong column type in DBO.ACL_RULE for column value.
        Found: float, expected: double precision

    A very similar problem is listed in Hibernate's JIRA database as HHH-1961 (http://opensource.atlassian.com/projects/hibernate/browse/HHH-1961). I'd like to avoid doing anything that will break MySql, Postgres, and Sql Server support, so extending the Oracle10gDialect appears to be the most promising of the workarounds mentioned in HHH-1961. But extending a Dialect is something I've never done before and I'm afraid there may be some nasty gotchas. What is the best workaround for this problem that won't break our compatibility with MySql, Postgres, and Sql Server? Thanks for taking the time to look at this!

    Read the article

  • standard way to perform a clean shutdown with Boost.Asio

    - by Timothy003
    I'm writing a cross-platform server program in C++ using Boost.Asio. Following the HTTP Server example on this page, I'd like to handle a user termination request without using implementation-specific APIs. I initially attempted to use the standard C signal library, but have been unable to find a design pattern suitable for Asio. The Windows example's design seems to resemble the signal library most closely, but there's a race condition where the console ctrl handler could be called after the server object has been destroyed. I'm trying to avoid race conditions that cause undefined behavior as specified by the C++ standard. Is there a standard (correct) way to stop the server? So far:

        #include <csignal>
        #include <functional>
        #include <boost/asio.hpp>

        using std::signal;
        using boost::asio::io_service;

        extern "C" {
            static void handle_signal(int);
        }

        namespace {
            std::function<void ()> sighandler;
        }

        void handle_signal(int)
        {
            sighandler();
        }

        int main()
        {
            io_service s;
            sighandler = std::bind(&io_service::stop, &s);

            auto res = signal(SIGINT, &handle_signal);
            // race condition? SIGINT raised before I could set ignore back
            if (res == SIG_IGN)
                signal(SIGINT, SIG_IGN);
            res = signal(SIGTERM, &handle_signal);
            // race condition? SIGTERM raised before I could set ignore back
            if (res == SIG_IGN)
                signal(SIGTERM, SIG_IGN);

            s.run();

            // reset signals
            signal(SIGTERM, SIG_DFL);
            signal(SIGINT, SIG_DFL);

            // is it defined whether handle_signal can still be in execution at this
            // point?
            sighandler = nullptr;
        }
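    For reference, newer Boost releases ship boost::asio::signal_set, which folds signal handling into the io_service itself and avoids the global-handler lifetime race described above. A minimal sketch is below, assuming a Boost version that provides signal_set and a C++11 compiler (the question's code already uses std::function and auto).

        #include <csignal>
        #include <iostream>
        #include <boost/asio.hpp>

        int main()
        {
            boost::asio::io_service s;

            // Register for SIGINT and SIGTERM through Asio itself.
            boost::asio::signal_set signals(s, SIGINT, SIGTERM);
            signals.async_wait(
                [&s](const boost::system::error_code& ec, int signal_number)
                {
                    if (!ec) {
                        std::cout << "caught signal " << signal_number << ", stopping\n";
                        s.stop();   // or close acceptors/sessions for a graceful shutdown
                    }
                });

            // ... set up acceptors, sessions, timers, etc. ...

            s.run();    // returns after stop() is called from the signal handler
            return 0;
        }

    Because the handler runs inside io_service::run(), it can safely touch the server object; a graceful variant would close the acceptor and let outstanding sessions finish instead of calling stop().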

    Read the article
