Search Results

Search found 14166 results on 567 pages for 'uses'.

  • sqlalchemy date type in 0.6 migration using mssql

    - by nosklo
    I'm connecting to an MS SQL Server through pyodbc, via the FreeTDS ODBC driver, on Ubuntu Linux 10.04. SQLAlchemy 0.5 uses DATETIME for sqlalchemy.Date() fields. SQLAlchemy 0.6 now uses DATE, but SQL Server 2000 doesn't have a DATE type. How can I make DATETIME the default for sqlalchemy.Date() on the SQLAlchemy 0.6 mssql+pyodbc dialect? I'd like to keep it as clean as possible. Here's code to reproduce the issue:

        import sqlalchemy
        from sqlalchemy import Table, Column, MetaData, Date, Integer, create_engine

        engine = create_engine(
            'mssql+pyodbc://sa:sa@myserver/mydb?driver=FreeTDS')
        m = MetaData(bind=engine)
        tb = sqlalchemy.Table('test_date', m,
            Column('id', Integer, primary_key=True),
            Column('dt', Date())
        )
        tb.create()

    And here is the traceback I'm getting:

        Traceback (most recent call last):
          File "/tmp/teste.py", line 15, in <module>
            tb.create()
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/schema.py", line 428, in create
            bind.create(self, checkfirst=checkfirst)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1647, in create
            connection=connection, **kwargs)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1682, in _run_visitor
            **kwargs).traverse_single(element)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/sql/visitors.py", line 77, in traverse_single
            return meth(obj, **kw)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/ddl.py", line 58, in visit_table
            self.connection.execute(schema.CreateTable(table))
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1157, in execute
            params)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1210, in _execute_ddl
            return self.__execute_context(context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1268, in __execute_context
            context.parameters[0], context=context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1367, in _cursor_execute
            context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1360, in _cursor_execute
            context)
          File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
            cursor.execute(statement, parameters)
        sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('42000', '[42000] [FreeTDS][SQL Server]Column or parameter #2: Cannot find data type DATE. (2715) (SQLExecDirectW)')
        CREATE TABLE test_date (
            id INTEGER NOT NULL IDENTITY(1,1),
            dt DATE NULL,
            PRIMARY KEY (id)
        )
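
    One possible workaround (a sketch, untested against this exact dialect; the LegacyDate name is mine): a TypeDecorator whose implementation type is DateTime, so the DDL renders DATETIME while Python code keeps working with date objects:

        import datetime
        from sqlalchemy.types import TypeDecorator, DateTime

        class LegacyDate(TypeDecorator):
            """Stores dates in a DATETIME column (SQL Server 2000 has no DATE)."""
            impl = DateTime

            def process_bind_param(self, value, dialect):
                # widen date -> datetime at midnight on the way in
                if isinstance(value, datetime.date) and not isinstance(value, datetime.datetime):
                    return datetime.datetime(value.year, value.month, value.day)
                return value

            def process_result_value(self, value, dialect):
                # narrow datetime -> date on the way out
                return value.date() if value is not None else None

    Columns would then be declared as Column('dt', LegacyDate()).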

  • Encrypting using RSA via COM Interop: "The requested operation requires delegation to be enabled on the machine"

    - by Mr AH
    Hi guys, I've got a little static method in a .NET class which takes a string, uses a stored public key, and returns the encrypted version of that string. This is basically so some user-entered data can be saved and encrypted, then retrieved and decrypted at a later date. Pretty basic stuff, and the unit test works fine. However, part of the application is in classic ASP. That uses a COM-visible version of the class to invoke the method on the real class and return the same string to the COM client (classic ASP). I use this kind of stuff all the time, but in this case we have a major problem. As the method is doing something with RSA keys and has to access certain machine information to do so, we get the error: "The requested operation requires delegation to be enabled on the machine." I've searched around a lot, but can't really understand what this means. I assume I am getting this error on the COM path but not the unit test because the unit test runs as me (Administrator) and classic ASP runs as IWAM. Does anyone know what I need to do to enable IWAM to do this? Or indeed whether this is the real problem here?
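
    One thing worth checking (a sketch, assuming the class uses RSACryptoServiceProvider; the container name below is illustrative): RSA key containers default to the user profile store, which service accounts like IWAM often cannot load. Forcing the machine-level key store sometimes avoids exactly this class of profile/delegation error:

        using System.Security.Cryptography;
        using System.Text;

        // Use the machine key store so the IIS worker account can reach
        // the key container without a loaded user profile.
        CspParameters cp = new CspParameters();
        cp.KeyContainerName = "MyAppKeys";           // illustrative name
        cp.Flags = CspProviderFlags.UseMachineKeyStore;

        using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(cp))
        {
            byte[] cipher = rsa.Encrypt(Encoding.UTF8.GetBytes("secret"), false);
        }

    The container would also need an ACL granting the IWAM account read access.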

  • Rails.cache throws "marshal dump" error when changed from memory store to memcached store

    - by gsmendoza
    If I set this in my environment:

        config.action_controller.cache_store = :mem_cache_store

    ActionController::Base.cache_store will use a memcached store, but Rails.cache will use a memory store instead:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}>

    In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache uses the memcached store, so I'm surprised that it uses the memory store. If I change the cache_store setting in my environment to

        config.cache_store = :mem_cache_store

    both ActionController::Base.cache_store and Rails.cache will now use the same memcached store, which is what I expect:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>

    However, when I run the app, I get a "marshal dump" error on the line where I call Rails.cache.fetch(key){ object }:

        no marshal_dump is defined for class Proc

        Extracted source (around line #1):
        1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... }

        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump'
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace'

    What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?
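
    Two things may help here (sketched from the error, not verified against this app): Rails.cache follows the global config.cache_store setting, so that is the one to set when helpers should hit memcached; and whatever the fetch block returns gets marshaled by memcache-client, so it must be plain data, never a Proc:

        # config/environments/production.rb -- the global store, so
        # Rails.cache and ActionController::Base.cache_store agree:
        config.cache_store = :mem_cache_store, 'localhost:11211'

        # The block's return value is Marshal.dump'ed by memcache-client,
        # so return a serializable object, not a lambda/Proc:
        Rails.cache.fetch(key, :expires_in => 15.minutes) do
          expensive_lookup.to_a    # illustrative; any Marshal-able value
        end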

  • How to refactor this Javascript anonymous function?

    - by HeavyWave
    We have this anonymous function in our code, which is part of the jQuery Ajax object parameters and which uses some variables from the function it is called from:

        this.invoke = function (method, data, callback, error, bare) {
            $.ajax({
                success: function (res) {
                    if (!callback) return;
                    var result = "";
                    if (res != null && res.length != 0)
                        result = JSON2.parse(res);
                    if (bare) {
                        callback(result);
                        return;
                    }
                    for (var property in result) {
                        callback(result[property]);
                        break;
                    }
                }
            });
        }

    I have omitted the extra code, but you get the idea. The code works perfectly fine, but it leaks 4 KB on each call in IE, so I want to refactor it to turn the anonymous function into a named one, like this.onSuccess = function(res) { .. }. The problem is that this function uses variables from this.invoke(..), so I cannot just take it outside of its body. How do I correctly refactor this code so that it does not use anonymous functions and parent function variables?
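
    One possible shape (a sketch; makeSuccessHandler is an invented name): pass the captured variables into a small named factory, so the handler closes over exactly two values instead of the whole invoke scope:

        // Factory returning a success handler; nothing else from
        // invoke's scope is retained by the closure.
        function makeSuccessHandler(callback, bare) {
            return function (res) {
                if (!callback) return;
                var result = (res != null && res.length != 0) ? JSON2.parse(res) : "";
                if (bare) {
                    callback(result);
                    return;
                }
                for (var property in result) {
                    callback(result[property]);
                    break;
                }
            };
        }

        this.invoke = function (method, data, callback, error, bare) {
            $.ajax({ success: makeSuccessHandler(callback, bare) });
        };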

  • Silverlight: AutoCompleteBox and TextWrapping

    - by Sven Sönnichsen
    How do you enable TextWrapping in the AutoCompleteBox control of the Silverlight Toolkit (November 2009)? There is no property to set the wrapping mode, so is there any workaround? Here is more info about my current problem: to me, the AutoCompleteBox consists of a list which displays all possible values and a TextBox where I enter a search string and display a selected value. I now want the selected value in the TextBox to wrap. Here is my current XAML, which uses the AutoCompleteBox in a DataGrid:

        <data:DataGrid x:Name="GrdComponents" ItemsSource="{Binding Path=Components}"
                       AutoGenerateColumns="false" Margin="4"
                       VerticalAlignment="Stretch" VerticalContentAlignment="Stretch"
                       HorizontalScrollBarVisibility="Visible">
            <data:DataGrid.Columns>
                <data:DataGridTemplateColumn Header="Component" Width="230">
                    <data:DataGridTemplateColumn.CellEditingTemplate>
                        <DataTemplate>
                            <input:AutoCompleteBox
                                Text="{Binding Component.DataSource, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True}"
                                Loaded="AcMaterials_Loaded" x:Name="Component"
                                SelectionChanged="AcMaterial_SelectionChanged"
                                IsEnabled="{Binding Component.IsReadOnly, Mode=OneWay, Converter={StaticResource ReadOnlyConverter}}"
                                BindingValidationError="TextBox_BindingValidationError"
                                ToolTipService.ToolTip="{Binding Component.Description}"
                                IsTextCompletionEnabled="False" FilterMode="Contains"
                                MinimumPopulateDelay="1" MinimumPrefixLength="3"
                                ValueMemberPath="Description">
                                <input:AutoCompleteBox.ItemTemplate>
                                    <DataTemplate>
                                        <TextBlock Text="{Binding DescriptionTypeNumber}"/>
                                    </DataTemplate>
                                </input:AutoCompleteBox.ItemTemplate>
                            </input:AutoCompleteBox>
                        </DataTemplate>
                    </data:DataGridTemplateColumn.CellEditingTemplate>
                </data:DataGridTemplateColumn>
            </data:DataGrid.Columns>
        </data:DataGrid>

    The AutoCompleteBox uses different values for the list (DescriptionTypeNumber) and for the selected value (Description). Sven
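
    One workaround that may be worth trying (a sketch; the part names "Text", "Popup" and "Selector" come from the control's template contract, but the template below is heavily abbreviated, not the full default template): re-template the AutoCompleteBox and set TextWrapping on the inner TextBox it looks up by name:

        <input:AutoCompleteBox.Template>
            <ControlTemplate TargetType="input:AutoCompleteBox">
                <Grid>
                    <!-- the template part the control resolves by name -->
                    <TextBox x:Name="Text" TextWrapping="Wrap" AcceptsReturn="True"/>
                    <Popup x:Name="Popup">
                        <ListBox x:Name="Selector"/>
                    </Popup>
                </Grid>
            </ControlTemplate>
        </input:AutoCompleteBox.Template>

    In practice one would copy the full default template from the toolkit sources and change only the TextBox attributes, so the drop-down visuals survive.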

  • Confused with an ASP.NET/WCF WSDL Parsing Error

    - by Vaccano
    I have a WCF web service that my ASP.NET app uses. It has been working fine for quite some time. I just added a DevExpress grid (and the DevExpress DLLs) and a new page that uses them, and now I am getting parsing errors on the WSDL. The weird part is that it works fine on my machine but fails on the web server machine. (Both are connecting to the same web service's WSDL.) Here is the error message I am getting:

        Server Error in '/MyWebAppWebDev' Application.
        --------------------------------------------------------------------------------
        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.

        Parser Error Message: Reference.svcmap: Failed to generate code for the service reference 'MyWebAppService'.
        Cannot import wsdl:portType
        Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter
        Error: Referenced type 'WebClientApp.MyWebAppService.ReferenceUpdatesDataContract, WebClientApp, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' with data contract name 'ReferenceUpdatesDataContract' in namespace 'http://schemas.datacontract.org/2004/07/MyWebAppServiceLibrary.DataContracts' cannot be used since it does not match imported DataContract. Need to exclude this type from referenced types.
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='IMyWebAppReferenceDataServiceLib']
        Cannot import wsdl:binding
        Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
        XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='IMyWebAppReferenceDataServiceLib']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:binding[@name='MyWebAppServicesDefaultEndpoint']
        Cannot import wsdl:port
        Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
        XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:binding[@name='MyWebAppServicesDefaultEndpoint']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:service[@name='MyWebAppReferenceDataServiceLib']/wsdl:port[@name='MyWebAppServicesDefaultEndpoint']

        Source Error: [No relevant source lines]
        Source File: /MyWebAppWebDev/App_WebReferences/MyWebAppService/    Line: 1

    I am completely stumped on this. I have checked my web.config endpoint address and it is spot on (and notably is not in the error message above). Any ideas would be welcome.
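
    The "does not match imported DataContract ... Need to exclude this type from referenced types" wording usually points at the "reuse types in referenced assemblies" feature picking up a stale or mismatched copy of the DTO assembly, plausibly dragged in alongside the new DevExpress DLLs. A sketch of regenerating the proxy while excluding the conflicting type via svcutil (paths illustrative; unchecking "Reuse types in referenced assemblies" in the service reference dialog is the GUI equivalent):

        rem Regenerate the client proxy, excluding the conflicting type
        rem from type reuse:
        svcutil.exe http://myserver/MyWebAppService.svc?wsdl ^
            /excludeType:WebClientApp.MyWebAppService.ReferenceUpdatesDataContract ^
            /out:MyWebAppServiceProxy.cs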

  • Can xjc -version be trusted?

    - by JasonPlutext
    I've spent the day debugging an issue with JAXB getting namespaces wrong or missing (possibly related to Marshaller.JAXB_FRAGMENT, but that's not the point here). I found the problem occurs with JAXB RI 2.1.10 in my endorsed dir. It is fixed if I use JAXB RI 2.2.4 or 2.2.6. Here is what is really confusing (and what made it take so long). The problem occurs on Linux with:

        $ java -version
        java version "1.7.0_03"
        OpenJDK Runtime Environment (IcedTea7 2.1.1pre) (7~u3-2.1.1~pre1-1ubuntu2)
        OpenJDK 64-Bit Server VM (build 22.0-b10, mixed mode)
        $ xjc -version
        xjc 2.2.4

    but it should work fine if this java really uses JAXB RI 2.2.4! Similarly, I can't reproduce the issue on Windows with Java 1.6.0_27, which reports:

        C:\Program Files\Java\jdk1.6.0_27\bin>java -version
        java version "1.6.0_27"
        Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
        C:\Program Files\Java\jdk1.6.0_27\bin>xjc -version
        xjc version "JAXB 2.1.10 in JDK 6"
        JavaTM Architecture for XML Binding(JAXB) Reference Implementation, (build JAXB 2.1.10 in JDK 6)

    and yet if I put the 2.1.10 RI in my endorsed dir, the problem occurs. It should occur with 1.6.0_27, if that really uses JAXB RI 2.1.10. It seems to me that the problem I'm experiencing has been fixed in the reference implementation somewhere after 2.1.10 and before 2.2.4, but that neither of the two VMs above actually uses the JAXB version it claims to. Or possibly they use the xjc they claim, but not what is in jaxb-api.jar and jaxb-impl.jar (I know there is a difference in the namespace prefix mapper property, but that won't be causing this problem). I've done these experiments on Win 7 and Ubuntu, in Tomcat (no Eclipse) and in Eclipse (no Tomcat), so I'm pretty confident I'm explaining my findings correctly. Can anyone provide any insight into what is happening? If I'm right, does anyone know what versions of JAXB the various Sun/Oracle JDKs really use?
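
    One relevant detail: xjc -version only reports which code-generator binary ran; the runtime can bind a completely different jaxb-impl. A small probe (a sketch) shows what a given VM actually loads at marshal time:

        import javax.xml.bind.JAXBContext;

        public class WhichJaxb {
            public static void main(String[] args) throws Exception {
                JAXBContext ctx = JAXBContext.newInstance(WhichJaxb.class);
                // The concrete context class and where it was loaded from
                // identify the RI that is really in effect (the code source
                // may print null for classes baked into the JDK itself).
                System.out.println(ctx.getClass().getName());
                System.out.println(ctx.getClass().getProtectionDomain().getCodeSource());
            }
        }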

  • Why is the TransactionScope operation not valid?

    - by Cragly
    I have a routine which uses a recursive loop to insert items into a SQL Server 2005 database. The first call, which initiates the loop, is enclosed within a transaction using TransactionScope. When I first call ProcessItem, the myItem data gets inserted into the database as expected. However, when ProcessItem is called from either ProcessItemLinks or ProcessItemComments, I get the following error:

        The operation is not valid for the state of the transaction

    I am running this in debug with VS 2008 on Windows 7 and have MSDTC running to enable distributed transactions. The code below isn't my production code but is set out exactly the same. AddItemToDatabase is a method on a class I cannot modify; it uses a standard ExecuteNonQuery() which creates a connection, then closes and disposes it once completed. I have looked at other postings on here and the internet and still cannot resolve this issue. Any help would be much appreciated.

        using (TransactionScope processItem = new TransactionScope())
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }

        private void ProcessItem(Item myItem)
        {
            AddItemToDatabase(myItem);
            ProcessItemLinks(myItem);
            ProcessItemComments(myItem);
        }

        private void ProcessItemLinks(Item myItem)
        {
            foreach (Item link in myItem.Links)
            {
                ProcessItem(link);
            }
        }

        private void ProcessItemComments(Item myItem)
        {
            foreach (Item comment in myItem.Comments)
            {
                ProcessItem(comment);
            }
        }

    Here is the top part of the stack trace. Unfortunately I can't show the build-up to this point, as it's company-sensitive information which I cannot disclose. Hope this is enough information.

        at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
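
    One thing the trace suggests: every ExecuteNonQuery opens a fresh connection that must enlist in the ambient transaction, and a scope that has already timed out (the default is about one minute) rejects new enlistments with exactly this message. A sketch of ruling that out by lengthening the timeout (values illustrative; machine.config's maxTimeout caps what can be set here):

        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
        options.Timeout = TimeSpan.FromMinutes(10);

        using (TransactionScope processItem =
                   new TransactionScope(TransactionScopeOption.Required, options))
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }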

  • Use UdpClient with IPv4 and IPv6?

    - by mazzzzz
    A little while ago I created a class to deal with my LAN networking programs. I recently upgraded one of my laptops to Windows 7 and realized that Windows 7 (or at least the way I have it set up) only supports IPv6, but my desktop is still back in the Windows XP days and only uses IPv4. The class I created uses the UdpClient class and is currently set up to only work with IPv4. Is there a way to modify my code to allow sending and receiving of both IPv6 and IPv4 packets? It would be hard to scrap the class's code; a lot of my programs rely on it. I would like to keep the class as close to its original state as possible, so I don't need to modify my older programs, only switch out the old class for the updated one. Thanks for any and all help, Max

    Send:

        using System.Net.Sockets;

        UdpClient tub = new UdpClient();
        tub.Connect(new IPEndPoint(ToIP, ToPort));
        UdpState s = new UdpState();
        s.client = tub;
        s.endpoint = new IPEndPoint(ToIP, ToPort);
        tub.BeginSend(data, data.Length, new AsyncCallback(SendCallBack), s);

        private void SendCallBack(IAsyncResult result)
        {
            UdpClient client = (UdpClient)((UdpState)(result.AsyncState)).client;
            IPEndPoint endpoint = (IPEndPoint)((UdpState)(result.AsyncState)).endpoint;
            client.EndSend(result);
        }

    Receive:

        UdpClient tub = new UdpClient(ReceivePort);
        UdpState s = new UdpState();
        s.client = tub;
        s.endpoint = new IPEndPoint(ReceiveIP, ReceivePort);
        s.callback = cb;
        tub.BeginReceive(new AsyncCallback(receivedPacket), s);

        public void receivedPacket(IAsyncResult result)
        {
            UdpClient client = (UdpClient)((UdpState)(result.AsyncState)).client;
            IPEndPoint endpoint = (IPEndPoint)((UdpState)(result.AsyncState)).endpoint;
            Byte[] receiveBytes = client.EndReceive(result, ref endpoint);
            ReceivedPacket = new Packet(receiveBytes);
            client.Close();
            // Do whatever with the packets now
        }
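
    A dual-stack socket might let the class keep its shape (a sketch; dual-mode sockets need Vista/Windows 7 or later, and the named SocketOptionName.IPv6Only value needs .NET 4 — on 3.5 the raw value (SocketOptionName)27 has been used instead):

        using System.Net;
        using System.Net.Sockets;

        // One IPv6 socket that also accepts IPv4 traffic (IPv4 peers
        // appear as IPv4-mapped addresses like ::ffff:192.168.0.5).
        UdpClient tub = new UdpClient(AddressFamily.InterNetworkV6);
        tub.Client.SetSocketOption(SocketOptionLevel.IPv6,
                                   SocketOptionName.IPv6Only, false);
        tub.Client.Bind(new IPEndPoint(IPAddress.IPv6Any, ReceivePort));

    The rest of the Begin/End plumbing could stay unchanged, since it only deals in IPEndPoint values.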

  • Compiling a DLL which includes Ogre3D gives an assertion error when used

    - by samaursa
    Hi, I have a framework that I am building which is compiled into a static library to be used by other projects. The library works perfectly without issues. The problem is that the link time is very long for the projects that use the library, so I thought I would make a DLL project of the same framework. I started with baby steps and created an MFC DLL project through Visual Studio. The project has the following header:

        /// --------------------------------------------
        #ifndef OGRECORE_H
        #define OGRECORE_H

        #ifdef OGREFW_EXPORT
        #define OGREFW_DLL __declspec(dllexport)
        #else
        #define OGREFW_DLL __declspec(dllimport)
        #endif

        class OgreRoot;

        namespace OgreFW
        {
            class OGREFW_DLL OgreCore// : public OIS::KeyListener, public OIS::MouseListener
            {
            public:
                OgreCore();
                ~OgreCore();
            };
        };

        #endif // OGRECORE_H

    and this is the source:

        #include "stdafx.h"
        #include "OgreCore.h"
        //#include "Ogre.h"
        //#include "OgreRoot.h"
        //#include "OgreRenderWindow.h"
        //#include "OgreLog.h"
        //#include "OgreLogManager.h"
        //#include "OgreOverlay.h"
        //#include "OgreViewport.h"
        //#include "OgreRenderWindow.h"
        //#include "OgreFrameListener.h"
        //#include "OgreWindowEventUtilities.h"
        //#include "OgreSceneNode.h"
        //#include "OgreEntity.h"
        //#include "OgreManualObject.h"
        //#include "OgreMeshManager.h"
        //#include "OgreConfigFile.h"
        //#include "OgreOverlayContainer.h"
        //#include "OgreOverlayManager.h"

        namespace OgreFW
        {
            OGREFW_DLL OgreCore::OgreCore()
            {
            }

            // ------------------------
            OGREFW_DLL OgreCore::~OgreCore()
            {
            }
        }

    As you can see, I have commented out the Ogre includes. When a project uses the compiled DLL and constructs this (OgreCore) class, it works perfectly fine. As soon as I uncomment one of the Ogre includes and compile the DLL again, the project that uses the DLL gives an assertion error. The full details can be found in my Ogre forum post: http://www.ogre3d.org/forums/viewtopic.php?f=2&t=58403 (I posted the question there first, but since it's not really an Ogre-specific question, I thought I would try here as well). Thank you in advance.

  • branch prediction

    - by Alexander
    Consider the following sequence of actual outcomes for a single static branch. T means the branch is taken; N means the branch is not taken. For this question, assume that this is the only branch in the program.

        T T T N T N T T T N T N T T T N T N

    Assume a two-level branch predictor that uses one bit of branch history, i.e. a one-bit BHR. Since there is only one branch in the program, it does not matter how the BHR is concatenated with the branch PC to index the BHT. Assume that the BHT uses one-bit counters and that, again, all entries are initialized to N. Which of the branches in this sequence would be mis-predicted? Use the table below. Now, I am not asking for answers to this question, but rather for guides and pointers on it. What is a two-level branch predictor and how does it work? What do BHR and BHT stand for?
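
    As a pointer: BHR is the branch history register (the recent outcome(s) of the branch) and BHT is the branch history table (the counters the history selects among); "two-level" means the history picks which counter makes the prediction. A tiny simulator of exactly this setup, useful for checking a worked answer by hand (a sketch):

        # 1-bit BHR indexes a 2-entry BHT of 1-bit counters,
        # everything initialized to N (0).
        stream = "TTTNTNTTTNTNTTTNTN"
        bhr = 0                 # last outcome: 0 = N, 1 = T
        bht = [0, 0]            # one 1-bit counter per history value
        mispredicted = []
        for i, ch in enumerate(stream, start=1):
            outcome = 1 if ch == 'T' else 0
            if bht[bhr] != outcome:
                mispredicted.append(i)
            bht[bhr] = outcome  # 1-bit counter: remember the last outcome
            bhr = outcome       # shift the new outcome into the history
        print(mispredicted)     # positions of mis-predicted branches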

  • Production settings file for log4j?

    - by James
    Here is my current log4j settings file. Are these settings ideal for production use, or is there something I should remove, tweak, or change? I ask because all my threads were being hung due to log4j blocking. I checked my open file descriptors and I was only using 113.

        # ***** Set root logger level to WARN and its two appenders to stdout and R.
        log4j.rootLogger=warn, stdout, R

        # ***** stdout is set to be a ConsoleAppender.
        log4j.appender.stdout=org.apache.log4j.ConsoleAppender
        # ***** stdout uses PatternLayout.
        log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
        # ***** Pattern to output the caller's file name and line number.
        log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n

        # ***** R is set to be a RollingFileAppender.
        log4j.appender.R=org.apache.log4j.RollingFileAppender
        log4j.appender.R.File=logs/myapp.log
        # ***** Max file size (102400KB = 100MB)
        log4j.appender.R.MaxFileSize=102400KB
        # ***** Keep five backup files
        log4j.appender.R.MaxBackupIndex=5
        # ***** R uses PatternLayout.
        log4j.appender.R.layout=org.apache.log4j.PatternLayout
        log4j.appender.R.layout.ConversionPattern=%p %t %d %c - %m%n

        # set httpclient debug levels
        log4j.logger.org.apache.component=ERROR,stdout
        log4j.logger.httpclient.wire=ERROR,stdout
        log4j.logger.org.apache.commons.httpclient=ERROR,stdout
        log4j.logger.org.apache.http.client.protocol=ERROR,stdout

    UPDATE: adding a thread dump sample; all 100 of my threads look like this:

        "pool-1-thread-5" - Thread t@25
          java.lang.Thread.State: BLOCKED on org.apache.log4j.spi.RootLogger@1d45a585 owned by: pool-1-thread-35
            at org.apache.log4j.Category.callAppenders(Category.java:201)
            at org.apache.log4j.Category.forcedLog(Category.java:388)
            at org.apache.log4j.Category.error(Category.java:302)
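
    The dump shows threads serialized on the logger's appender lock, not starved of file descriptors. The standard mitigation in log4j 1.x (a sketch; note AsyncAppender needs the XML configuration format, since the .properties format cannot express appender-ref) is to put an AsyncAppender in front of the file appender, so callers enqueue events instead of waiting on I/O:

        <!-- log4j.xml -->
        <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
            <param name="BufferSize" value="512"/>
            <!-- "R" would be the RollingFileAppender defined alongside -->
            <appender-ref ref="R"/>
        </appender>
        <root>
            <priority value="warn"/>
            <appender-ref ref="ASYNC"/>
        </root>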

  • How does Lucene index documents?

    - by Mehdi Amrollahi
    Hello, I have read some documentation about Lucene, including the talk at http://lucene.sourceforge.net/talks/pisa. I don't really understand how Lucene indexes documents, and I don't understand which algorithms Lucene uses for indexing. That talk says Lucene uses this algorithm for indexing:

        incremental algorithm:
            maintain a stack of segment indices
            create index for each incoming document
            push new indexes onto the stack
            let b=10 be the merge factor; M=8

            for (size = 1; size < M; size *= b) {
                if (there are b indexes with size docs on top of the stack) {
                    pop them off the stack;
                    merge them into a single index;
                    push the merged index onto the stack;
                } else {
                    break;
                }
            }

    How does this algorithm provide optimized indexing? Does Lucene use a B-tree algorithm, or any other algorithm like that, for indexing, or does it have a particular algorithm of its own? Thank you for reading my post.
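
    For intuition: the pseudocode describes logarithmic merging. Each document becomes a tiny one-doc segment, and whenever b same-sized segments pile up, they are merged into one segment b times larger, so each document is rewritten only O(log N) times rather than the whole index being rebuilt per document. A toy simulation of that policy (a sketch of the idea, not Lucene code):

        def add_document(stack, b=10, M=8):
            """Push a 1-doc index, then cascade merges of b equal-sized indexes."""
            stack.append(1)
            size = 1
            while size < M:
                if len(stack) >= b and all(s == size for s in stack[-b:]):
                    del stack[-b:]
                    stack.append(size * b)  # b indexes of `size` docs -> one of size*b
                    size *= b
                else:
                    break

        stack = []
        for _ in range(1234):
            add_document(stack)
        print(stack)  # with M=8: many 10-doc segments plus 1-doc leftovers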

  • Why doesn't my android application show up in the launcher?

    - by rushinge
    I'm developing an application for the Android platform targeted at API level 4 (Android 1.6), but I can't get it to show up on my phone and I can't figure out why. Here's my AndroidManifest.xml. Is there a problem in here, or is there something else I should be looking at?

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
              package="com.sbe.app.hellocogen"
              android:versionCode="1"
              android:versionName="1.0">
            <uses-permission android:name="android.permission.INTERNET" />
            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <activity android:name=".activity.ListPlants" android:label="@string/app_name">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
                <activity android:name=".activity.AddPlant" android:label="Add Plant">
                    <intent-filter>
                        <action android:name="android.intent.action.VIEW"/>
                        <category android:name="android.intent.category.DEFAULT"/>
                    </intent-filter>
                </activity>
                <activity android:name=".activity.UnitActivity" android:label="IP HERE, PLANT NAME">
                    <intent-filter>
                        <action android:name="android.intent.action.VIEW"/>
                        <category android:name="android.intent.category.DEFAULT"/>
                    </intent-filter>
                </activity>
            </application>
            <uses-sdk android:minSdkVersion="4"/>
        </manifest>

    When I started this application it didn't show up, but I fixed it by setting the minimum API level to 4 instead of 7; then it started showing up. Now it has stopped showing up again and I don't know why.
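
    The MAIN/LAUNCHER filter on ListPlants looks well-formed, so it may help to watch what the package manager reports at install time; a stale or failed install is a common culprit for a launcher icon vanishing. The usual checks from a shell (the APK name is illustrative):

        adb uninstall com.sbe.app.hellocogen
        adb install bin/HelloCogen.apk
        adb logcat PackageManager:I ActivityManager:I *:S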

  • Relation between HTTP Keep Alive duration and TCP timeout duration

    - by Suresh Kumar
    I am trying to understand the relation between TCP/IP and HTTP timeout values. Are these two timeout values different or the same? Most web servers allow users to set the HTTP keep-alive timeout value through some configuration. How is this value used by the web server? Is it just set on the underlying TCP/IP socket, i.e. are the HTTP keep-alive timeout and the TCP/IP keep-alive timeout the same thing, or are they treated differently? My understanding (maybe incorrect) is this: the web server uses the default timeout on the underlying TCP socket (i.e. indefinite) regardless of the configured HTTP keep-alive timeout, and creates a worker thread that counts down the specified HTTP timeout interval; when the worker thread hits zero, it closes the connection. EDIT: My question is about the relation or difference between the two timeout durations, i.e. what happens when the HTTP keep-alive timeout duration and the timeout on the socket (SO_TIMEOUT) which the web server uses are different? Should I even worry about whether these two are the same?
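
    For what it's worth, the two knobs live in different layers and are configured independently; a sketch of where each is typically set (values illustrative):

        # HTTP layer -- Apache httpd: how long an idle persistent
        # connection is held open before the server closes it:
        KeepAlive On
        KeepAliveTimeout 15          # seconds, in httpd.conf

        # TCP layer -- Linux kernel: when to start probing a silent
        # connection with TCP keepalive packets:
        sysctl -w net.ipv4.tcp_keepalive_time=7200   # seconds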

  • IE Security Warning with widgets

    - by superexsl
    Hey, I'm creating an ASP.NET application which uses Facebook Connect and FBML tags. It also uses the LinkedIn widget. When I run this app in any other browser, there are no warnings and everything works. However, in IE, a message like this comes up:

        Security Warning: The current webpage is trying to open a site in your Trusted sites list. Do you want to allow this?
        Current site: http://www.facebook.com
        Trusted site: http://localhost

    (the same happens for LinkedIn.com). I know how to fix this from a client perspective and stop the security warning showing up. However, is it possible to ensure this message doesn't come up at all, as it could be off-putting for users who don't know how to suppress it? I haven't tried uploading the app to my web host, so I'm not sure if the message will appear for everyone in production, but I always get it on my local machine. (None of my pages use SSL, so I don't think that's the issue. I tried using FB's HTTPS URLs, but that didn't make a difference.) Thanks

  • How can I return json from my WCF rest service (.NET 4), using Json.Net, without it being a string wrapped in quotes?

    - by Samuel Meacham
    The DataContractJsonSerializer is unable to handle many scenarios that Json.Net handles just fine when properly configured (specifically, cycles). A service method can either return a specific object type (in this case a DTO), in which case the DataContractJsonSerializer will be used, or I can have the method return a string and do the serialization myself with Json.Net. The problem is that when I return a JSON string as opposed to an object, the JSON sent to the client is wrapped in quotes. Using DataContractJsonSerializer, returning a specific object type, the response is:

        {"Message":"Hello World"}

    Using Json.Net to return a JSON string, the response is:

        "{\"Message\":\"Hello World\"}"

    I do not want to have to eval() or JSON.parse() the result on the client, which is what I would have to do if the JSON comes back as a string wrapped in quotes. I realize that the behavior is correct; it's just not what I want/need. I need the raw JSON: the behavior when the service method's return type is an object, not a string. So, how can I have my method return an object type but not use the DataContractJsonSerializer? How can I tell it to use the Json.Net serializer instead? Or is there some way to write directly to the response stream, so I can just return the raw JSON myself, without the wrapping quotes? Here is my contrived example, for reference:

        [DataContract]
        public class SimpleMessage
        {
            [DataMember]
            public string Message { get; set; }
        }

        [ServiceContract]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class PersonService
        {
            // uses DataContractJsonSerializer
            // returns {"Message":"Hello World"}
            [WebGet(UriTemplate = "helloObject")]
            public SimpleMessage SayHelloObject()
            {
                return new SimpleMessage() { Message = "Hello World" };
            }

            // uses Json.Net serialization, to return a JSON string
            // returns "{\"Message\":\"Hello World\"}"
            [WebGet(UriTemplate = "helloString")]
            public string SayHelloString()
            {
                SimpleMessage message = new SimpleMessage() { Message = "Hello World" };
                string json = JsonConvert.SerializeObject(message);
                return json;
            }

            // I need a mix of the two: return an object type, but use the Json.Net serializer.
        }
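
    The "write the raw bytes myself" route is available in WCF 4 by returning a Stream: WCF passes a Stream body through untouched, so no serializer runs and no quoting happens. A sketch (method name and route are mine; needs System.IO, System.Text, and System.ServiceModel.Web):

        // Serialize with Json.Net, then hand WCF an opaque byte stream.
        [WebGet(UriTemplate = "helloRaw")]
        public Stream SayHelloRaw()
        {
            SimpleMessage message = new SimpleMessage() { Message = "Hello World" };
            string json = JsonConvert.SerializeObject(message);
            WebOperationContext.Current.OutgoingResponse.ContentType =
                "application/json; charset=utf-8";
            return new MemoryStream(Encoding.UTF8.GetBytes(json));
        }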

  • Android HttpClient not working in spite of giving permissions in the manifest file

    - by primal
    Hi, I was trying the HttpClient tutorials from svn.apache.org. While running the application I get the following error in the console:

        [2010-04-30 09:26:36 - HalloAndroid] ActivityManager: java.lang.SecurityException: Permission Denial: starting Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.org.example/.HalloAndroid } from null (pid=-1, uid=-1) requires android.permission.INTERNET

    I have added android.permission.INTERNET in AndroidManifest.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
              package="com.org.example"
              android:versionCode="1"
              android:versionName="1.0">
            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <activity android:name=".HalloAndroid"
                          android:label="@string/app_name"
                          android:permission="android.permission.INTERNET">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
            <uses-permission android:name="android.permission.INTERNET"></uses-permission>
        </manifest>

    The Java code in HalloAndroid.java is as follows:

        HttpClient httpclient = new DefaultHttpClient();
        HttpGet httpget2 = new HttpGet("http://google.com/");
        HttpResponse response2 = null;
        try {
            response2 = httpclient.execute(httpget2);
        } catch (ClientProtocolException e1) {
            e1.printStackTrace();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
        HttpEntity entity = response2.getEntity();
        if (entity != null) {
            long len = entity.getContentLength();
            if (len != -1 && len < 2048) {
                try {
                    Log.d(TAG, EntityUtils.toString(entity));
                } catch (ParseException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } else {
                // Stream content out
            }
        }

    Any help is much appreciated.
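
    Note that the denial in the log is about starting the activity, not about network access: android:permission on an <activity> declares a permission that callers must hold, so the launcher (uid -1 in the message) is refused. The likely fix is simply to drop that attribute and keep <uses-permission> at the manifest level, along these lines:

        <activity android:name=".HalloAndroid"
                  android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>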

  • PHP PCRE differences on testing and hosting servers

    - by Gary Pearman
    Hi all, I've got the following regular expression that works fine on my testing server but just returns an empty string on my hosted server:

        $text = preg_replace('~[^\\pL\d]+~u', $use, $text);

    Now I'm pretty sure this comes down to the hosting server's version of PCRE not being compiled with Unicode property support. The differences in the two versions are as follows. My server:

        PCRE version 7.8 2008-09-05
        Compiled with
          UTF-8 support
          Unicode properties support
          Newline sequence is LF
          \R matches all Unicode newlines
          Internal link size = 2
          POSIX malloc threshold = 10
          Default match limit = 10000000
          Default recursion depth limit = 10000000
          Match recursion uses stack

    Hosting server:

        PCRE version 4.5 01-December-2003
        Compiled with
          UTF-8 support
          Newline character is LF
          Internal link size = 2
          POSIX malloc threshold = 10
          Default match limit = 10000000
          Match recursion uses stack

    Also note that the version on the hosting server (the same version PHP is compiled against) is pretty old. What confuses me, though, is that pcretest fails on both servers from the command line with:

        re> ~[^\\pL\d]+~u
        ** Unknown option 'u'

    although this regexp works fine when run from PHP on my server. So, I guess my questions are: does the regular expression fail on the hosting server because of the lack of Unicode properties? Or is there something else that I'm missing? Thanks all, Gaz.
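
    Two observations that may help (hedged, as I can't test against that exact build): pcretest is a red herring, since the u modifier is PHP-specific (pcretest spells UTF-8 mode with /8, not /u); and PCRE 4.5 appears to predate \p{} property support entirely, which would make the pattern fail to compile on the host. A runtime probe with a fallback might look like:

        <?php
        // Does this PCRE build understand \p and the u modifier?
        $hasProps = @preg_match('~\pL~u', 'a');
        if ($hasProps === 1) {
            $text = preg_replace('~[^\pL\d]+~u', $use, $text);
        } else {
            // crude ASCII-only fallback for old PCRE builds
            $text = preg_replace('~[^a-zA-Z0-9]+~', $use, $text);
        }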

  • Problem with apostrophes and other special characters when using aspell in windows

    - by Loftx
    Hi there, we seem to be having a problem with the spell checker on our content management system, where it marks the ve part of We’ve as a misspelling. The spellchecker uses Aspell, which is called from a script on the server that executes cmd.exe and uses it to pipe a file into Aspell (it's a long-winded way, I know, but our server-side programming language (ColdFusion) doesn't support writing to stdin for executables). Aspell is called by executing:

        c:\windows\system32\cmd.exe /c type d:\path_to_file\file.txt | "C:\Program Files\Aspell\bin\aspell" --lang=en -a

    where file.txt contains the text to be spell-checked, e.g. ^Oh have We’ve (the caret is added to prevent piping problems, I believe). Aspell then outputs:

        @(#) International Ispell Version 3.1.20 (but really Aspell 0.50.3)
        * * * & ve 62 12: vie, voe, V, v, veg, vet, Be, Ce, be, Ev, E, e, vex, VA, VI, Va, Vi, vi, we, VD, VF, VG, VJ, VP, VT, Vt, vb, vs, DE, De, Fe, GE, Ge, He, IE, Le, ME, Me, NE, Ne, OE, PE, Re, SE, Se, Te, Xe, he, me, re, ye, Ave, Eve, Ive, ave, eve, VAR, var, veer, vier, view, vow

    However, we have a dev site with the same version of Aspell, and when the same file is used it outputs with no misspellings. Both servers are running Aspell 0.50.3 on Windows Server 2003, but there could be other differences in configuration:

        @(#) International Ispell Version 3.1.20 (but really Aspell 0.50.3)

    I'm wondering if the problem is to do with the piping part of the process or with something different in the Aspell configuration. Does anyone have any ideas? Cheers, Tom
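
    One configuration-level suspect (a sketch to try; untested on that setup): if We’ve contains a typographic apostrophe (U+2019) rather than ASCII ', Aspell only keeps the word together when it knows the input encoding, so declaring it explicitly on the command line may change the behavior:

        c:\windows\system32\cmd.exe /c type d:\path_to_file\file.txt | "C:\Program Files\Aspell\bin\aspell" --lang=en --encoding=utf-8 -a

    That would also explain the dev/live difference if the two servers have different default code pages.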

  • IIS, Web services, Time out error

    - by Eduard
    Hello, we’ve got a problem with an ASP.NET web application that uses web services of another system. I'll describe our architecture: we have a web application and a Windows service that use the same web services. The Windows service runs all the time and sends information to these web services once an hour. The web application lets users send the same information manually. The problem is that sometimes, when a user tries to send information manually from the web application, .NET throws the exception "The operation has timed out". At that same time, the Windows service successfully sends all necessary information to the web services. The IT staff that supports these web services asserts that there was no request at all from our web application at that time. When we restarted IIS (iisreset), everything started to work fine, and this situation repeats over and over. There is no anti-virus or firewall on the server. My suspicion is that there is something wrong with IIS, patches, configuration, or whatever. The only specific thing is that there are requests that can last 2 minutes (web service response wait time). We tried to reproduce this situation on our local test servers, but everything works fine there. OS: Windows Server 2003 R2. .NET: 3.5.
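
    Given the two-minute outbound calls, one thing worth ruling out (a sketch, not a confirmed diagnosis): .NET caps concurrent outgoing HTTP connections per host at two by default, so slow calls can make later ones queue inside the client until they time out without ever leaving the machine, which would match the "no request arrived" observation. The limit can be raised in web.config:

        <system.net>
          <connectionManagement>
            <!-- default is 2 concurrent connections per remote host -->
            <add address="*" maxconnection="24"/>
          </connectionManagement>
        </system.net>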

  • Writing to a log4net FileAppender with multiple threads performance problems

    - by Wayne
    TickZoom is a very high-performance app which uses its own parallelization library and multiple OS threads for smooth utilization of multi-core computers. The app hits a bottleneck where users need to write information to a log4net FileAppender from separate OS threads. The FileAppender uses the MinimalLock feature so that each thread can lock and write to the file and then release it for the next thread to write. If MinimalLock is disabled, log4net reports errors about the file already being locked by another process (thread). A better way for log4net to handle this would be a single thread that takes care of writing to the FileAppender, with the other threads simply adding their messages to a queue. That way, MinimalLock could be disabled to greatly improve logging performance. Additionally, the application does a lot of CPU-intensive work, so using a separate thread for writing to the file would also improve performance, since the CPU would never wait on the I/O to complete. So the question is: does log4net already offer this feature? If so, how do you enable threaded writing to a file? Is there another, more advanced appender, perhaps? If not, then since log4net is already wrapped in the platform, it would be possible to implement a separate thread and queue for this purpose in the TickZoom code. Sincerely, Wayne
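
    For context: the stock appenders in log4net 1.2 are synchronous; there is no built-in asynchronous file appender. If rolling one by hand, a minimal queue-and-forward wrapper could look like the sketch below (class and member names are mine, not a log4net API):

        using System.Collections.Generic;
        using System.Threading;
        using log4net.Appender;
        using log4net.Core;

        // Callers enqueue; one background thread forwards to the nested
        // appender(s), so logging threads never block on file I/O.
        public class AsyncForwardingAppender : ForwardingAppender
        {
            private readonly Queue<LoggingEvent> queue = new Queue<LoggingEvent>();
            private readonly AutoResetEvent signal = new AutoResetEvent(false);

            public AsyncForwardingAppender()
            {
                Thread worker = new Thread(Drain);
                worker.IsBackground = true;
                worker.Start();
            }

            protected override void Append(LoggingEvent loggingEvent)
            {
                loggingEvent.Fix = FixFlags.All;   // capture thread-bound data now
                lock (queue) queue.Enqueue(loggingEvent);
                signal.Set();
            }

            private void Drain()
            {
                while (true)
                {
                    signal.WaitOne();
                    LoggingEvent[] pending;
                    lock (queue)
                    {
                        pending = queue.ToArray();
                        queue.Clear();
                    }
                    foreach (LoggingEvent e in pending)
                        base.Append(e);    // hand off to the attached appenders
                }
            }
        }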

  • Problem with MessageContract, Generic return types and clientside naming

    - by Soeteman
    I'm building a web service which uses MessageContracts, because I want to add custom fields to my SOAP header. In a previous topic, I learned that a composite response has to be wrapped. For this purpose, I devised a generic ResponseWrapper class:

        [MessageContract(WrapperNamespace = "http://mynamespace.com", WrapperName = "WrapperOf{0}")]
        public class ResponseWrapper<T>
        {
            [MessageBodyMember(Namespace = "http://mynamespace.com")]
            public T Response { get; set; }
        }

    I made a ServiceResult base class, defined as follows:

        [MessageContract(WrapperNamespace = "http://mynamespace.com")]
        public class ServiceResult
        {
            [MessageBodyMember]
            public bool Status { get; set; }

            [MessageBodyMember]
            public string Message { get; set; }

            [MessageBodyMember]
            public string Description { get; set; }
        }

    To be able to include the request context in the response, I use a derived class of ServiceResult, which uses generics:

        [MessageContract(WrapperNamespace = "http://mynamespace.com", WrapperName = "ServiceResultOf{0}")]
        public class ServiceResult<TRequest> : ServiceResult
        {
            [MessageBodyMember]
            public TRequest Request { get; set; }
        }

    This is used in the following way:

        [OperationContract()]
        ResponseWrapper<ServiceResult<HCCertificateRequest>> OrderHealthCertificate(RequestContext<HCCertificateRequest> context);

    I expected my client code to be generated as:

        ServiceResultOfHCCertificateRequest OrderHealthCertificate(RequestContextOfHCCertificateRequest context);

    Instead, I get the following:

        ServiceResultOfHCCertificateRequestzSOTD_SSj OrderHealthCertificate(CompType1 c1, CompType2 c2, HCCertificateRequest context);

    CompType1 and CompType2 are properties of the RequestContext class. The problem is that a hash is appended to the end of ServiceResultOfHCCertificateRequestzSOTD_SSj. How do I need to define my generic return types in order for the client type to be generated as expected (without the hash)?
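
    For comparison, the documented behavior for [DataContract] generics is that WCF appends a hash derived from the type arguments unless the contract name is pinned explicitly; "{0}" expands to the type argument's name, and the hash only appears when "{#}" is used or no Name is given. A sketch of pinning the name at the data contract level, which may carry over to the contracts wrapped by the message:

        // "{0}" is replaced by the closed type argument's name; an explicit
        // Name without "{#}" suppresses the generated hash suffix.
        [DataContract(Name = "ServiceResultOf{0}", Namespace = "http://mynamespace.com")]
        public class ServiceResult<TRequest>
        {
            [DataMember]
            public TRequest Request { get; set; }
        }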

  • Subversion versus Vault

    - by WebDude
    I'm currently reviewing the benefits of moving from SVN to SourceGear Vault. Has anyone got advice, or a link to a detailed comparison between the two? Bear in mind that I would have to move my current source control system across, which works strongly in SVN's favor. Here is some information I have found out so far from my own investigations. I have been running time tests between the two, and Vault seems to perform most operations much faster. The time tests used the same server as the repository, the same workstation client, and the same project.

    Time comparisons:

        SVN
            Add/Commit              12:30
            Get Latest Revision      5:35
            Tagging/Labelling        0:01
            Branching                N/A - I don't think true branching exists in SVN

        Vault
            Add/Commit               4:45
            Get Latest Revision      0:51
            Tagging/Labelling        0:30
            Branching                3:23

    I also found an online source comparing some other points; this is the kind of information I'm looking for.

    Usage comparisons:

        - Subversion is edit/merge/commit only. Vault allows you to do either edit/merge/commit or checkout/edit/checkin.
        - Vault looks and acts just like VSS, which makes the learning curve effectively zero for VSS users.
        - Vault has a VS plugin, but it only works if you're going to run in checkout mode.
        - Subversion has clients for pretty much every OS you can imagine; Vault has a GUI client for Windows and a command line client for Mono.
        - Both will support remote work, since both use HTTP as their transport (Subversion uses extended DAV, Vault uses SOAP).
        - Subversion installation, especially with Apache, is more complex.
        - Subversion has a lot of third-party support. Vault has just a few things.

    My question: has anyone got advice, or a link to a detailed comparison between the two?
