Search Results

Search found 2006 results on 81 pages for 'xxx'.

Page 21/81 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Debugging Messaging Exception

    - by rizza
    We have a batch program that uses JavaMail 1.2 to send emails. In our development environment we have never managed to reproduce the above-mentioned exception, but in the client's environment they hit it frequently, with the following error trace: javax.mail.MessagingException: 550 Requested action not taken: NUL characters are not allowed. at com.sun.mail.smtp.SMTPTransport.issueCommand (SMTPTransport.java: 879) at com.sun.mail.smtp.SMTPTransport.finishData (SMTPTransport.java: 820) at com.sun.mail.smtp.SMTPTransport.sendMessage (SMTPTransport.java: 322) ... I'm not sure whether this is connected to my problem: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4697158. But trying JavaMail 1.4.2, I see that the content transfer encoding of the email is still 7bit, so I'm not sure whether using JavaMail 1.4.2 would solve the problem. Please note that I can only test in our development environment, which has not been able to replicate this. Given the above exception, how would I know whether it comes from the sender or the receiver side? What debugging steps could you suggest? EDIT: Here is a DEBUG trace of the actual sending (some information masked): DEBUG: not loading system providers in <java.home>/lib DEBUG: not loading optional custom providers file: /META-INF/javamail.providers DEBUG: successfully loaded default providers DEBUG: Tables of loaded providers DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]} DEBUG: Providers Listed By Protocol: {imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]} DEBUG: not loading optional address map file: /META-INF/javamail.address.map DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc] DEBUG SMTP: useEhlo true, useAuth false DEBUG: SMTPTransport trying to connect to host "nnn.nnn.n.nnn", port nn DEBUG SMTP RCVD: 220 xxxx.xxxxxxxxxxx.xxx SMTP; Mon, 23 Mar 2009 15:18:57 +0800 DEBUG: SMTPTransport connected to host "nnn.nnn.n.nnn", port: nn DEBUG SMTP SENT: EHLO xxxxxxxxx DEBUG SMTP RCVD: 250 xxxx.xxxxxxxxxxx.xxx Hello DEBUG SMTP: use8bit false DEBUG SMTP SENT: MAIL FROM:<[email protected]> DEBUG SMTP RCVD: 250 <[email protected]>... Sender ok DEBUG SMTP SENT: RCPT TO:<[email protected]> DEBUG SMTP RCVD: 250 <[email protected]>... Recipient ok Verified Addresses [email protected] DEBUG SMTP SENT: DATA DEBUG SMTP RCVD: 354 Enter mail, end with "." on a line by itself DEBUG SMTP SENT: . DEBUG SMTP RCVD: 550 Requested action not taken: NUL characters are not allowed.
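    The 550 in the trace arrives right after the terminating "." of the DATA phase, so it is the receiving server rejecting the message content it was actually sent, which points at NUL (0x00) bytes being present in the generated message on the sender side. One way to confirm this without access to the client's mail server is to dump the raw MIME message before transmission (for example with JavaMail's MimeMessage.writeTo into a file) and scan the dump for NUL bytes. A minimal sketch of that check in Python, assuming the dump was written to a hypothetical file named message.eml:

```python
# Scan a raw MIME message dump for NUL (0x00) bytes to confirm whether the
# sender is producing them. Assumes the message was written to "message.eml",
# e.g. via JavaMail's MimeMessage.writeTo(new FileOutputStream("message.eml")).
with open("message.eml", "rb") as f:
    data = f.read()

positions = [i for i, b in enumerate(data) if b == 0]
if positions:
    print(f"Found {len(positions)} NUL byte(s), first at offset {positions[0]}")
    # Show a little context around the first offending byte.
    start = max(0, positions[0] - 20)
    print(data[start:positions[0] + 20])
else:
    print("No NUL bytes found - the 550 is likely introduced after the message leaves the sender.")
```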

    Read the article

  • .NET client connecting to IBM MQ over SSL

    - by user171523
    We got key files from our client that we need to use to connect to MQ over SSL. The files we received from the client are: xxx.crl xxx.kdb xxx.rdb xxx.sth xxx.tab They said the client channel table is included in those files. I am trying to connect using the code below. They also say I don't need to specify the queue manager because it is defined in the client channel table. One more thing: they created the key using the user "user1". Code: Hashtable connectionProperties = new Hashtable(); // Add the connection type connectionProperties.Add(MQC.TRANSPORT_PROPERTY, connectionType); MQQueueManager qMgr; MQEnvironment.SSLCipherSpec = "TRIPLE_DES_SHA_US"; MQEnvironment.SSLKeyRepository = @"D:\Cert\BB\key"; MQEnvironment.UserId = "user1"; MQEnvironment.properties.Add(MQC.TRANSPORT_PROPERTY, connectionType); qMgr = new MQQueueManager(); Error I am getting: Message = "MQRC_Q_MGR_NAME_ERROR" I can also telnet to the server successfully. Can someone help me understand what I am doing wrong here and why I am getting this error?

    Read the article

  • Mixed-mode C++/CLI crashing: heap corruption in atexit (static destructor registration)

    - by thaimin
    I am working on deploying a program and the codebase is a mixture of C++/CLI and C#. The C++/CLI comes in all flavors: native, mixed (/clr), and safe (/clr:safe). In my development environment I create a DLL of all the C++/CLI code and reference that from the C# code (EXE). This method works flawlessly. For my releases I want to ship a single executable (simply stating "why not just have a DLL and EXE separate?" is not acceptable). So far I have succeeded in compiling the EXE with all the different sources. However, when I run it I get the "XXXX has stopped working" dialog with options to Check online, Close and Debug. The problem details are as follows: Problem Event Name: APPCRASH Fault Module Name: StackHash_8d25 Fault Module Version: 6.1.7600.16559 Fault Module Timestamp: 4ba9b29c Exception Code: c0000374 Exception Offset: 000cdc9b OS Version: 6.1.7600.2.0.0.256.48 Locale ID: 1033 Additional Information 1: 8d25 Additional Information 2: 8d25552d834e8c143c43cf1d7f83abb8 Additional Information 3: 7450 Additional Information 4: 74509ce510cd821216ce477edd86119c If I debug and send it to Visual Studio, it reports: Unhandled exception at 0x77d2dc9b in XXX.exe: A heap has been corrupted Choosing break results in it stopping at ntdll.dll!77d2dc9b() with no additional information. If I tell Visual Studio to continue, the program starts up fine and seems to work without incident, probably because a debugger is now attached. What do you make of this? How do I avoid this heap corruption? The program seems to work fine except for this. My abridged compilation script is as follows (I have omitted my error checking for brevity): @set TARGET=x86 @set TARGETX=x86 @set OUT=%TARGETX% @call "%VS90COMNTOOLS%\..\..\VC\vcvarsall.bat" %TARGET% @set WIMGAPI=C:\Program Files\Windows AIK\SDKs\WIMGAPI\%TARGET% set CL=/Zi /nologo /W4 /O2 /GS /EHa /MD /MP /D NDEBUG /D _UNICODE /D UNICODE /D INTEGRATED /Fd%OUT%\ /Fo%OUT%\ set INCLUDE=%WIMGAPI%;%INCLUDE% set LINK=/nologo /LTCG /CLRIMAGETYPE:IJW /MANIFEST:NO /MACHINE:%TARGETX% /SUBSYSTEM:WINDOWS,6.0 /OPT:REF /OPT:ICF /DEFAULTLIB:msvcmrt.lib set LIB=%WIMGAPI%;%LIB% set CSC=/nologo /w:4 /d:INTEGRATED /o+ /target:module :: Compiling resources omitted @set CL_NATIVE=/c /FI"stdafx-native.h" @set CL_MIXED=/c /clr /LN /FI"stdafx-mixed.h" @set CL_PURE=/c /clr:safe /LN /GL /FI"stdafx-pure.h" @set NATIVE=... @set MIXED=... @set PURE=... cl %CL_NATIVE% %NATIVE% cl %CL_MIXED% %MIXED% cl %CL_PURE% %PURE% link /LTCG /NOASSEMBLY /DLL /OUT:%OUT%\core.netmodule %OUT%\*.obj csc %CSC% /addmodule:%OUT%\core.netmodule /out:%OUT%\GUI.netmodule /recurse:*.cs link /FIXED /ENTRY:GUI.Program.Main /OUT:%OUT%\XXX.exe ^ /ASSEMBLYRESOURCE:%OUT%\core.resources,XXX.resources,PRIVATE /ASSEMBLYRESOURCE:%OUT%\GUI.resources,GUI.resources,PRIVATE ^ /ASSEMBLYMODULE:%OUT%\core.netmodule %OUT%\gui.res %OUT%\*.obj %OUT%\GUI.netmodule Update 1: Upon compiling this with debug symbols and trying again, I do in fact get more information.
    The call stack is: msvcr90d.dll!_msize_dbg(void * pUserData, int nBlockUse) Line 1511 + 0x30 bytes msvcr90d.dll!_dllonexit_nolock(int (void)* func, void (void)* * * pbegin, void (void)* * * pend) Line 295 + 0xd bytes msvcr90d.dll!__dllonexit(int (void)* func, void (void)* * * pbegin, void (void)* * * pend) Line 273 + 0x11 bytes XXX.exe!_onexit(int (void)* func) Line 110 + 0x1b bytes XXX.exe!atexit(void (void)* func) Line 127 + 0x9 bytes XXX.exe!`dynamic initializer for 'Bytes::Null''() Line 7 + 0xa bytes mscorwks.dll!6cbd1b5c() [Frames below may be incorrect and/or missing, no symbols loaded for mscorwks.dll] ... The line of my code that 'causes' this (the dynamic initializer for Bytes::Null) is: Bytes Bytes::Null; In the header it is declared as: class Bytes { public: static Bytes Null; }; I also tried a global extern in the header like so: extern Bytes Null; // header Bytes Null; // cpp file This failed in the same way. It seems that the CRT atexit function is responsible, being inadvertently required due to the static initializer. Fix: As Ben Voigt pointed out, the use of any CRT functions (including native static initializers) requires proper initialization of the CRT (which happens in mainCRTStartup, WinMainCRTStartup, or _DllMainCRTStartup). I have added a mixed C++/CLI file that has a C++ main or WinMain: using namespace System; [STAThread] // required if using STA COM objects (such as drag-n-drop or file dialogs) int main() { // or "int __stdcall WinMain(void*, void*, wchar_t**, int)" for GUI applications array<String^> ^args_orig = Environment::GetCommandLineArgs(); int l = args_orig->Length - 1; // required to remove first argument (program name) array<String^> ^args = gcnew array<String^>(l); if (l > 0) Array::Copy(args_orig, 1, args, 0, l); return XXX::CUI::Program::Main(args); // return XXX::GUI::Program::Main(args); } After doing this, the program gets a little further but still has issues (which will be addressed elsewhere): the program works fine when it is solely in C#, as well as whenever it is just calling C++/CLI methods, getting C++/CLI properties, and creating managed C++/CLI objects; events added by C# into the C++/CLI code never fire (even though they should); and one other weird error is an InvalidCastException saying it can't cast from X to X (where X is the same type as X...). However, since the heap corruption is fixed (by getting the CRT initialized), this question is done.

    Read the article

  • COPY TO xxxx.xls TYPE XLS (VFP 8.0 SP1)

    - by Andrea.Ko
    Hi all, I am downloading a table from SQL Server 2008 using VFP 8.0 (SP1) with the command: COPY TO xxx TYPE XLS Some of the data in the Excel output is disappearing. Example, table in SQL: Cus(ID int(4), CusNam VARCHAR(35)) When I issue the following commands in a VFP form: (a) COPY TO xxx TYPE FOX2x [data displays correctly as follows] ID CusNam 1 ABC 2 DEF (b) COPY TO xxx TYPE XLS [2nd record disappears] ID CusNam 1 2 DEF I appreciate any help!

    Read the article

  • Yahoo flagging mail as spam when using relay server

    - by modulaaron
    I'm using Postfix to relay mail from my site to my mail server. Mail is received properly at my Gmail and Hotmail accounts - only Yahoo is the problem. The Yahoo mail headers state: Received-SPF: none (mta1133.mail.mud.yahoo.com: domain of [email protected] does not designate permitted sender hosts) In contrast, the Gmail headers state: Received-SPF: pass (google.com: domain of [email protected] designates 74.50.xxx.xxx as permitted sender) client-ip=74.50.xxx.xxx; Reverse DNS is set up correctly, as is my SPF record. Does anyone have any suggestions as to what I can do to solve the Yahoo problem (short of contacting Yahoo, as this is a brand new mail server)? FYI - I just set up domainkeys, but I'm not sure whether they should be on the origin or relay server. Thanks

    Read the article

  • Regex in Notepad++

    - by bsreekanth
    Can anyone provide a regex for Notepad++ for the search and replace (conversion) below? ADD ( PRIMARY KEY (xxx) ) ; to ADD PRIMARY KEY (xxx) ; Basically, remove the parentheses around the PRIMARY KEY expression. The value xxx differs between statements. If not Notepad++, I may also try the regex in vim or a shell script. Thanks a lot. Babu.
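    A pattern along the following lines should do it: capture the inner PRIMARY KEY (...) group and drop the outer parentheses. It is shown here as a small Python sketch so the expression can be tested quickly; the same pattern (with \1 in the replacement field) is what you would paste into Notepad++'s or vim's regex find/replace, modulo the syntax of the particular regex engine your Notepad++ version uses:

```python
import re

line = "ADD ( PRIMARY KEY (xxx) ) ;"

# Capture "PRIMARY KEY ( ... )" and drop the parentheses wrapped around it.
pattern = r"ADD\s*\(\s*(PRIMARY\s+KEY\s*\([^)]*\))\s*\)"
print(re.sub(pattern, r"ADD \1", line))  # -> ADD PRIMARY KEY (xxx) ;
```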

    Read the article

  • Suggest the best options for designing a dynamic web interface using PHP, MySQL and AJAX

    - by Krishna
    Hello, I am designing a web interface for a company. Here is the company's profile: the company currently has 5 branches and plans to extend its branches all over the country. It is an insurance surveying company dealing with 6 categories in the insurance domain, viz. Engineering, Fire, Marine, Motor, Miscellaneous, and Risk Inspection. The branches are named b1, b2, b3, b4, b5 and extending. Finally, they have contracts with 22 companies. For each claim they assign a unique ID like contractcompany/category/serialno. Example: take contracted companies named xxx, sss, zzz: xxx/Engineering/001 sss/Engineering/001 . . . xxx/Engineering/002 sss/Engineering/002 . . . xxx/Fire/001 sss/Fire/001 . . . xxx/Fire/002 . . . and so on. This is the way they issue the unique ID for each claim. What I want is to develop the interface with PHP, MySQL and AJAX: auto-generate the unique ID for each claim; store full details of the claims with reference to the unique ID; show all claims on one page, viewable by branch and by category; send a monthly report (all claims given to them and the status of those claims) to the contracted companies; give access to contracted companies, but they can view only their respective claims; each claim has its own documents, which can be uploaded by our own company users or the administrator and are associated with the unique ID, and contracted companies can view those files; give access to branches to enter new claims and update old claims; the administrator can create, update and delete all claims and their details; only the administrator can grant new users (own company branches / contracted companies). Finally, the panel is completely database driven. Could anybody help? Thanks in advance. Kindly do the needful and oblige. Thanks and regards, Krishna. P [email protected]
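    For the ID scheme described above (contracted company / category / running serial), the essential piece is a separate counter per (company, category) pair. A tiny illustrative sketch in Python, with hypothetical names and an in-memory counter, purely to pin down the format; in the real application the serial would come from the MySQL table, e.g. MAX of the existing serial for that company and category plus one, inside a transaction:

```python
from collections import defaultdict

categories = ["Engineering", "Fire", "Marine", "Motor", "Miscellaneous", "Risk Inspection"]
counters = defaultdict(int)  # (company, category) -> last serial used

def next_claim_id(company: str, category: str) -> str:
    """Build IDs like xxx/Engineering/001, incrementing per company + category."""
    assert category in categories
    counters[(company, category)] += 1
    return f"{company}/{category}/{counters[(company, category)]:03d}"

print(next_claim_id("xxx", "Engineering"))  # xxx/Engineering/001
print(next_claim_id("sss", "Engineering"))  # sss/Engineering/001
print(next_claim_id("xxx", "Engineering"))  # xxx/Engineering/002
```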

    Read the article

  • Any suggestions for a good automated web load testing tool?

    - by fmunkert
    What are some good automated tools for load testing (stress testing) web applications, that do not use record and replay of HTTP network packets? I am aware that there are numerous load testing tools on the market that record and replay HTTP network packets. But these are unsuitable for my purpose, because of this: The HTTP packet format changes very often in our application (e.g. when we optimize an AJAX call). We do not want to adapt all test scripts just because there is a slight change in HTTP packet format. Our test team shall not need to know any internals about our application to write their test scripts. A tool that replays HTTP packets, however, requires the team to know the format of HTTP requests and responses, such that they can adapt details of the replayed HTTP packets (e.g. user name). The automated load testing tool I am looking for should be able to let the test team write "black box" test scripts such as: Invoke web page at URL http://... . First, enter XXX into text field XXX. Then, press button XXX. Wait until response has been received from web server. Verify that text field XXX now contains the text XXX. The tool should be able to simulate up to several 1000 users, and it should be compatible with web applications using ASP.NET and AJAX.

    Read the article

  • How to configure Postfix to send more emails per hour than the default?

    - by dina-ak
    Hello; my Postfix only lets me send 3600 emails an hour (from which I conclude that there is a 1-second delay between each email), while I want to send double that number. I looked in the Postfix configuration. Are there any parameters I can change to send more than 3600 emails an hour? This is the output of postconf -n: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases bounce_queue_lifetime = 1d command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 default_destination_concurrency_limit = 5 default_destination_rate_delay = 0s html_directory = no inet_interfaces = all inet_protocols = ipv4 initial_destination_concurrency = 2 lmtp_destination_rate_delay = 0s local_destination_rate_delay = 0s mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man maximal_queue_lifetime = 1d mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = example.com myhostname = server01.example.com myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix qmgr_message_recipient_limit = 10000 queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.5.6/README_FILES relay_destination_rate_delay = 0s sample_directory = /usr/share/doc/postfix-2.5.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtp_bind_address = xxx.xxx.xxx.xxx smtp_destination_rate_delay = 0s smtp_generic_maps = hash:/etc/postfix/generic smtpd_banner = $myhostname ESMTP $mail_name smtpd_client_restrictions = check_client_access hash:/etc/postfix/access unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual virtual_destination_rate_delay = 0s

    Read the article

  • WCF - Beginner's question on Address (of ABC)

    - by Lijo
    Hi team, I am new to WCF and have a question. Suppose I have a service defined as follows, where the host has two base addresses. I usually click on the http://... base address to generate the proxy. When the proxy is generated, will it only carry the http address? How can I generate a proxy for net.tcp? Is there an article that explains the use of net.tcp with localhost and ASP.NET? <service name="XXX.RRR.Common.ServiceLayer.MySL" behaviorConfiguration="returnFaults"> <endpoint contract="XXX.RRR.Common.ServiceLayer.IMySL" binding="netTcpBinding" bindingConfiguration="MessagingBinding" behaviorConfiguration="LargeEndpointBehavior"/> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:86/XXX/RRR/ManagerService"/> <add baseAddress="http://localhost:76/XXX/RRR/ManagerService"/> </baseAddresses> </host> </service> Thanks, Lijo

    Read the article

  • WCF (REST) multiple host headers with one endpoint

    - by Maan
    I have an issue with a WCF REST service (.NET 4) which has multiple host headers but one endpoint. The host headers are, for example: xxx.yyy.net xxx.yyy.com Both host headers are configured in IIS over HTTPS and redirect to the same WCF service endpoint. I have an error-handling behavior which logs some extra information in case of an error. The problem is that the logging behavior works for only one of the two URLs. When I first call the .net URL, the logging only works for requests on the .net URL. When I first call the .com URL (after a worker process recycle), it only works for requests on the .com URL. The configuration looks like this: <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/> <services> <service name="XXX.RemoteHostService"> <endpoint address="" behaviorConfiguration="RemoteHostEndPointBehavior" binding="webHttpBinding" bindingConfiguration="HTTPSTransport" contract="XXX.IRemoteHostService" /> </service> </services> <extensions> <behaviorExtensions> <add name="errorHandling" type="XXX.ErrorHandling.ErrorHandlerBehavior, XXX.Services, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> </behaviorExtensions> </extensions> <bindings> <webHttpBinding> <binding name="HTTPSTransport"> <security mode="Transport"> <transport clientCredentialType="None"/> </security> </binding> </webHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="RemoteHostEndPointBehavior"> <webHttp /> <errorHandling /> </behavior> </endpointBehaviors> </behaviors> …. Should I configure multiple endpoints? Or how should I configure the WCF service so that the logging behavior works for both URLs? I have tried several things, including solutions mentioned earlier on Stack Overflow, but no luck so far...

    Read the article

  • How is a relative JMP (x86) implemented in an Assembler?

    - by Pindatjuh
    While building my assembler for the x86 platform I encountered some problems encoding the JMP instruction: enc inst size in bytes EB cb JMP rel8 2 E9 cw JMP rel16 4 (because of the 0x66 16-bit prefix) E9 cd JMP rel32 5 ... (from my favourite x86 instruction website, http://siyobik.info/index.php?module=x86&id=147) All are relative jumps, where the size of each encoding (operation + operand) is in the third column. Now my original (and therefore flawed) design reserved the maximum space (5 bytes) for each instruction. The operand is not yet known, because it's a jump to a yet-unknown location. So I implemented a "rewrite" mechanism that rewrites the operand at the correct location in memory once the jump target is known, and fills the rest with NOPs. This is a somewhat serious concern in tight loops. Now my problem is with the following situation: b: XXX c: JMP a e: XXX ... XXX d: JMP b a: XXX (where XXX is any instruction, depending on the to-be-assembled program) The problem is that I want the smallest possible encoding for a JMP instruction (and no NOP filling). I have to know the size of the instruction at c before I can calculate the relative distance between a and b for the operand at d. The same applies to the JMP at c: it needs to know the size of d before it can calculate the relative distance between e and a. How do existing assemblers implement this, or how would you implement it? This is what I am thinking would solve the problem: first encode to opcodes all the instructions between the JMP and its target, and if this region contains a variable-sized opcode, use the maximum size, i.e. 5 for JMP. Then, since in some cases the JMP is oversized (because it might have fit in a smaller encoding), another pass will search for oversized JMPs, shrink them, and move all following instructions up, and set absolute branching instructions (i.e. external CALLs) after this pass is completed. I wonder whether this is an over-engineered solution, which is why I ask this question.
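    One common answer is iterative "branch relaxation", which is roughly the opposite of the shrink-oversized-jumps idea above: assume every jump takes its short form, lay out the offsets, then widen any jump whose displacement does not fit and repeat until nothing changes. Because sizes only ever grow, the loop terminates, and mutually dependent jumps (like c and d above) sort themselves out. A simplified sketch in Python, ignoring the rel16 form and treating every non-jump instruction as one byte (both simplifications, not x86 reality):

```python
# Branch-relaxation sketch: each instruction is ("insn", None) with a fixed
# size, or ("jmp", label). labels maps a label name to the index it points at.
def relax(instrs, labels, insn_size=1):
    sizes = [2 if kind == "jmp" else insn_size for kind, _ in instrs]  # start with rel8 (EB cb)
    changed = True
    while changed:
        changed = False
        offsets = [0]
        for s in sizes:                      # byte offset of every instruction
            offsets.append(offsets[-1] + s)
        for i, (kind, target) in enumerate(instrs):
            if kind != "jmp" or sizes[i] == 5:
                continue                     # already widened to rel32 (E9 cd)
            disp = offsets[labels[target]] - (offsets[i] + sizes[i])  # rel8 counts from the end of the JMP
            if not -128 <= disp <= 127:
                sizes[i] = 5                 # widen and re-run, since later offsets shifted
                changed = True
    return sizes

# b: XXX ; c: JMP a ; 200 filler instructions ; d: JMP b ; a: XXX
instrs = [("insn", None), ("jmp", "a")] + [("insn", None)] * 200 + [("jmp", "b"), ("insn", None)]
labels = {"b": 0, "a": len(instrs) - 1}
print(relax(instrs, labels))   # both jumps end up as 5-byte rel32 forms here
```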

    Read the article

  • Can't get ifstream to work in Xcode

    - by segfault
    No matter what I try, I can't get the following code to work correctly. ifstream inFile; inFile.open("sampleplanet"); cout << (inFile.good()); //prints a 1 int levelLW = 0; int numLevels = 0; inFile >> levelLW >> numLevels; cout << (inFile.good()); //prints a 0 At the first cout << (inFile.good()); it prints a 1, and at the second a 0. That tells me the file is opening correctly, but inFile fails as soon as I read from it. The file has more than enough lines/characters, so there is no way I have read past the end of the file by that point. File contents: 8 2 #level 2 XXXXXXXX X......X X..X..XX X.X....X X..XX..X XXXX...X X...T..X XXX..XXX #level 1 XXXXXXXX X......X X..X.XXX X.X..X.X X..XX..X X......X X^....SX XXX.^XXX

    Read the article

  • NHibernate: Select entire entity plus aggregate columns

    - by cbp
    I want to return an entire entity, along with some aggregate columns. In SQL I would use an inner select, something like this: SELECT TOP 10 f.*, inner_query.[average xxx] FROM ( SELECT f.Id, AVG(fb.xxx) AS [average xxx] FROM foobar fb INNER JOIN foo f ON f.FoobarId = fb.Id ) AS inner_query INNER JOIN foo f ON f.Id = inner_query.Id Is this possible with CreateCriteria?

    Read the article

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my Carrierwave-uploaded images locally to using Amazon S3 via the fog gem in my Rails 3.1 app. Images are being added fine, but when I click on an image in my application, the URL exposes my access key and a signature. Here is a sample URL (XXX replaces the actual values): https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418 This happens in development (localhost:3000) and when I am using Heroku for production. Here is my uploader: class ImageUploader < CarrierWave::Uploader::Base include CarrierWave::RMagick storage :fog def store_dir "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}" end process :convert => :jpg process :resize_to_limit => [640, 640] version :thumb do process :convert => :jpg process :resize_to_fill => [280, 205] end version :avatar do process :convert => :jpg process :resize_to_fill => [120, 120] end end And my config/initializers/fog.rb: CarrierWave.configure do |config| config.fog_credentials = { :provider => 'AWS', :aws_access_key_id => 'XXX', :aws_secret_access_key => 'XXX', } config.fog_directory = 'bucketname' config.fog_public = false end Does anyone know how to make sure this information isn't exposed?

    Read the article

  • Cannot find Driver when using generic database bundle

    - by Marc
    I have a project that is built up from several OSGi bundles. One of them is a generic database bundle that defines a DataSource which can be used throughout the project. The Spring bean definition of this service is: <osgi:service interface="javax.sql.DataSource"> <bean class="org.postgresql.ds.PGPoolingDataSource"> <property name="databaseName" value="xxx" /> <property name="serverName" value="xxx" /> <property name="user" value="xxx" /> <property name="password" value="xxx" /> </bean> </osgi:service> Now, when using this DataSource in a different bundle, we get an error: No suitable driver found for jdbc:postgresql://localhost/xxx I have tried the following to add org.postgresql.Driver to the DriverManager: Instantiated an empty bean for that Driver in the Spring context, like this: <bean class="org.postgresql.Driver" /> Instantiated the Driver statically in one of the classes, like this: Class.forName("org.postgresql.Driver"); Added a file META-INF\services\java.sql.Driver with the content org.postgresql.Driver None of these solutions seems to help.

    Read the article

  • Apache "(13) Permission denied" in user's home directory

    - by Dave
    Hi, my friend's website was working fine until he moved the document root from /var/www/xxx to /home/user/xxx. Apache now gives "(13) Permission denied" error messages when we try to access the site via a web browser. The site is configured as a virtual directory. All the Apache configuration was unchanged (except for the directory change). We tried chmod 777 /home/user/xxx and chown apache /home/user/xxx, but they didn't work. Is there some kind of security feature set on users' home directories? The server OS is CentOS (GoDaddy VPS). Any help is appreciated! Thanks!

    Read the article

  • Optimize MySQL query (ngrams, COUNT(), GROUP BY, ORDER BY)

    - by Gerardo
    I have a database with thousands of companies and their locations. I have implemented n-grams to optimize search. I make one query to retrieve all the companies that match the search query and another one to get a list of their locations and the number of companies in each location. The query I am trying to optimize is the latter. Maybe the problem is this: every company ('anunciante') has a field ('estado') used for logical deletes, so if 'estado' equals 1 the company should be retrieved. When I run the EXPLAIN command, it shows that the query goes through almost 40k rows, when the actual result (the companies that really match) is only 80. How can I optimize this? This is my query (XXX represents the n-grams for the search query): SELECT provincias.provincia AS provincia, provincias.id, COUNT(*) AS cantidad FROM anunciantes JOIN anunciante_invertido AS a_i0 ON anunciantes.id = a_i0.id_anunciante JOIN indice_invertido AS indice0 ON a_i0.id_invertido = indice0.id LEFT OUTER JOIN domicilios ON anunciantes.id = domicilios.id_anunciante LEFT OUTER JOIN localidades ON domicilios.id_localidad = localidades.id LEFT OUTER JOIN provincias ON provincias.id = localidades.id_provincia WHERE anunciantes.estado = 1 AND indice0.id IN (SELECT invertido_ngrama.id_palabra FROM invertido_ngrama JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama WHERE ngrama.ngrama = 'XXX') AND indice0.id IN (SELECT invertido_ngrama.id_palabra FROM invertido_ngrama JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama WHERE ngrama.ngrama = 'XXX') AND indice0.id IN (SELECT invertido_ngrama.id_palabra FROM invertido_ngrama JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama WHERE ngrama.ngrama = 'XXX') AND indice0.id IN (SELECT invertido_ngrama.id_palabra FROM invertido_ngrama JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama WHERE ngrama.ngrama = 'XXX') AND indice0.id IN (SELECT invertido_ngrama.id_palabra FROM invertido_ngrama JOIN ngrama ON ngrama.id = invertido_ngrama.id_ngrama WHERE ngrama.ngrama = 'XXX') GROUP BY provincias.id ORDER BY cantidad DESC And this is the query explained (hope it can be read in this format): id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY anunciantes ref PRIMARY,estado estado 1 const 36669 Using index; Using temporary; Using filesort 1 PRIMARY domicilios ref id_anunciante id_anunciante 4 db84771_viaempresas.anunciantes.id 1 1 PRIMARY localidades eq_ref PRIMARY PRIMARY 4 db84771_viaempresas.domicilios.id_localidad 1 1 PRIMARY provincias eq_ref PRIMARY PRIMARY 4 db84771_viaempresas.localidades.id_provincia 1 1 PRIMARY a_i0 ref PRIMARY,id_anunciante,id_invertido PRIMARY 4 db84771_viaempresas.anunciantes.id 1 Using where; Using index 1 PRIMARY indice0 eq_ref PRIMARY PRIMARY 4 db84771_viaempresas.a_i0.id_invertido 1 Using index 6 DEPENDENT SUBQUERY ngrama const PRIMARY,ngrama ngrama 5 const 1 Using index 6 DEPENDENT SUBQUERY invertido_ngrama eq_ref PRIMARY,id_palabra,id_ngrama PRIMARY 8 func,const 1 Using index 5 DEPENDENT SUBQUERY ngrama const PRIMARY,ngrama ngrama 5 const 1 Using index 5 DEPENDENT SUBQUERY invertido_ngrama eq_ref PRIMARY,id_palabra,id_ngrama PRIMARY 8 func,const 1 Using index 4 DEPENDENT SUBQUERY ngrama const PRIMARY,ngrama ngrama 5 const 1 Using index 4 DEPENDENT SUBQUERY invertido_ngrama eq_ref PRIMARY,id_palabra,id_ngrama PRIMARY 8 func,const 1 Using index 3 DEPENDENT SUBQUERY ngrama const PRIMARY,ngrama ngrama 5 const 1 Using index 3 DEPENDENT SUBQUERY invertido_ngrama eq_ref PRIMARY,id_palabra,id_ngrama PRIMARY 8 func,const 1 Using
index 2 DEPENDENT SUBQUERY ngrama const PRIMARY,ngrama ngrama 5 const 1 Using index 2 DEPENDENT SUBQUERY invertido_ngrama eq_ref PRIMARY,id_palabra,id_ngrama PRIMARY 8 func,const 1 Using index

    Read the article

  • How to eliminate duplicate nodes based on values of multiple attributes?

    - by JayRaj
    Hello all, how can I eliminate duplicate nodes based on the values of multiple (more than 1) attributes, where the attribute names are passed as parameters to the stylesheet? I am aware of the Muenchian method of grouping that uses an <xsl:key> element, but I came to know that XSLT 1.0 does not allow parameters/variables in <xsl:key>. Is there another method to achieve duplicate-node removal? It is fine if it is not as efficient as the Muenchian method. Update from previous question: XML: <data id = "root"> <record id="1" operator1='xxx' operator2='yyy' operator3='zzz'/> <record id="2" operator1='abc' operator2='yyy' operator3='zzz'/> <record id="3" operator1='abc' operator2='yyy' operator3='zzz'/> <record id="4" operator1='xxx' operator2='yyy' operator3='zzz'/> <record id="5" operator1='xxx' operator2='lkj' operator3='tyu'/> <record id="6" operator1='xxx' operator2='yyy' operator3='zzz'/> <record id="7" operator1='abc' operator2='yyy' operator3='zzz'/> <record id="8" operator1='abc' operator2='yyy' operator3='zzz'/> <record id="9" operator1='xxx' operator2='yyy' operator3='zzz'/> <record id="10" operator1='rrr' operator2='yyy' operator3='zzz'/> </data>
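    Whatever the mechanism, the underlying idea is a composite key: group the records by the combined values of the chosen attributes and keep only the first record of each group. (In XSLT 1.0, variables are indeed not allowed in xsl:key, but they are allowed in ordinary predicates, which is why preceding-sibling comparisons are a commonly suggested fallback.) The grouping idea itself, sketched in Python with ElementTree purely as an illustration of the logic, with attrs standing in for the stylesheet parameters:

```python
import xml.etree.ElementTree as ET

xml = """<data id="root">
  <record id="1" operator1="xxx" operator2="yyy" operator3="zzz"/>
  <record id="2" operator1="abc" operator2="yyy" operator3="zzz"/>
  <record id="4" operator1="xxx" operator2="yyy" operator3="zzz"/>
  <record id="5" operator1="xxx" operator2="lkj" operator3="tyu"/>
</data>"""

attrs = ["operator1", "operator2"]   # the "parameters": attributes that define a duplicate

root = ET.fromstring(xml)
seen = set()
for record in list(root.findall("record")):
    key = tuple(record.get(a) for a in attrs)   # composite key over the chosen attributes
    if key in seen:
        root.remove(record)                     # duplicate of an earlier record
    else:
        seen.add(key)

print(ET.tostring(root, encoding="unicode"))    # records 1, 2 and 5 survive
```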

    Read the article

  • openDatabase Hello World - 2

    - by cf_PhillipSenn
    This is a continuation from a previous stackoverflow question. I've renamed some variables so that I can tell what are keywords and what are names that I can control. Q: Why is the deleteRow function not working? <html> <head> <title>html5 openDatabase Hello World</title> <script src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load("jquery", "1"); google.setOnLoadCallback(OnLoadCallback); function OnLoadCallback() { var dbo; dbo = openDatabase('HelloWorld'); dbo.transaction( function(T1) { T1.executeSql( 'CREATE TABLE IF NOT EXISTS myTable ' + ' (myTableID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, ' + ' Field1 TEXT NOT NULL );' ); } ); dbo.transaction(function(T2) { T2.executeSql('SELECT * FROM myTable',[], function (T6, result) { for (var i=0; i < result.rows.length; i++) { var row = result.rows.item(i); $('#savedData').append('<li id="'+row.myTableID+'">' + row.Field1 + '</li>'); } }, errorHandler); }); $('form').submit(function() { var xxx = $('#xxx').val(); dbo.transaction( function(T3) { T3.executeSql( 'INSERT INTO myTable (Field1) VALUES (?);', [xxx], function(){ $('#savedData').append('<li id="ThisisWhereIneedHELP">' + xxx + '</li>'); $('#xxx').val(''); }, errorHandler ); } ); return false; }); $('#savedData > li').live('click', function (){ deleteRow(this.id); $(this).remove(); }); } function deleteRow(myTableID) { alert('trying to delete'); dbo.transaction(function(T4) { T4.executeSql('DELETE FROM myTable WHERE myTableID = ?', [myTableID], function(){ alert('Deleted!'); }, errorHandler); }); } function errorHandler(T5, error) { alert('Oops. Error was '+error.message+' (Code '+error.code+')'); // T5.executeSql('INSERT INTO errors (code, message) VALUES (?, ?);', // [error.code, error.message]); return false; } </script> </head> <body> <form method="post"> <input name="xxx" id="xxx" /> <p> <input type="submit" name="OK" /> </p> <ul id="savedData"> </ul> </form> </body> </html>

    Read the article

  • How to send reminder notifications to subscribers for renewal from SQL Server 2005?

    - by codemonkie
    I have a table in a SQL Server database, dbo.subscribers, which contains the following columns: -SubscriberID -JoinDateTime The business logic says a subscription lasts 2 weeks and a reminder should be sent 7 days after the JoinDateTime. The way the system was designed, reminders are sent via a URL call, e.g. http://xxx.xxx.xxx.xxx/renew_userid=SubscriberID/, and that URL can only be called from our web server, which is the only whitelisted IP we have been given. Currently a Windows service queries the DB once a day at midnight, grabs all expiring subscribers and sends them reminders; however, this batch approach only sends reminders to the nearest day. I could reduce the interval from 1 day to 1 hour so that the service sends notifications closer to the exact JoinDateTime + 7 days requirement. I have heard that a stored procedure can be written to perform a task like this in a near-real-time manner; if so, please give me some hints on how to do it. Another question: is SSRS a bit of an overkill for something like this? Please advise. TIA
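    A stored procedure by itself has nothing that fires the URL on a schedule, so one pragmatic option is to keep the polling service but shrink its window and fire at the computed due moment. A rough sketch of that loop in Python (purely illustrative: the pyodbc connection string, the 10-minute window and the lack of error handling are all assumptions; the table, columns and URL format are taken from the question):

```python
import datetime as dt
import time
import urllib.request
import pyodbc  # assumption: running on the whitelisted web server with this driver available

POLL_MINUTES = 10
RENEW_URL = "http://xxx.xxx.xxx.xxx/renew_userid={}/"  # format from the question

conn = pyodbc.connect("DSN=subscribers")               # placeholder connection string
window_start = dt.datetime.now()

while True:
    window_end = window_start + dt.timedelta(minutes=POLL_MINUTES)
    cur = conn.cursor()
    # Subscribers whose reminder moment (JoinDateTime + 7 days) falls in this window.
    cur.execute(
        "SELECT SubscriberID, DATEADD(day, 7, JoinDateTime) AS DueAt "
        "FROM dbo.subscribers "
        "WHERE DATEADD(day, 7, JoinDateTime) >= ? AND DATEADD(day, 7, JoinDateTime) < ?",
        window_start, window_end)
    for subscriber_id, due_at in sorted(cur.fetchall(), key=lambda row: row[1]):
        wait = (due_at - dt.datetime.now()).total_seconds()
        if wait > 0:
            time.sleep(wait)                           # fire at (almost exactly) JoinDateTime + 7 days
        urllib.request.urlopen(RENEW_URL.format(subscriber_id))
    # Let the rest of the window elapse so the next query starts where this one ended.
    remaining = (window_end - dt.datetime.now()).total_seconds()
    if remaining > 0:
        time.sleep(remaining)
    window_start = window_end
```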

    Read the article

  • How can I redirect all traffic from one domain to another with an .htaccess file?

    - by George Edison
    Say I have a subdomain xxx.yyy.com running Apache. The files are stored in /home/someone/public_html/xxx. What I want to do is redirect all requests to a domain name zzz.com which is using the same location for its files. (In other words, xxx.yyy.com and zzz.com are aliases for each other) I just want people accessing zzz.com, so if someone goes to xxx.yyy.com they should be redirected to zzz.com. Can this easily be done with a rewrite rule in an .htaccess file?

    Read the article

  • Workaround for PHP SOAP request failure when wsdl defines service port binding as https and port 80?

    - by scooterhanson
    I am consuming a SOAP web service using PHP 5's SOAP extension. The service's WSDL was generated using Axis java2wsdl, and whatever options were used during generation result in the port binding URL being listed as https://xxx.xxx.xxx.xxx:80. If I download the WSDL to my server, remove the port 80 specification from the port binding location value, and reference the local file in my SoapClient call, it works fine. However, if I try to reference it remotely (or download it and reference it locally as-is), the call fails with a SOAP fault. I have no input into the service side, so I can't make them change their WSDL-generation process. So, unless there's a way to make the SoapClient ignore the port, I'm stuck with using a locally modified copy of someone else's WSDL (which I'd rather not do). Any thoughts on how to make my SoapClient ignore the port 80?

    Read the article

  • How to remove characters like (), ' * [] from grep results with grep, awk or sed?

    - by easyyu
    For example, I made a file with grep that gives me the following result: 16 Jan 07:18:42 (name1), xx.210.49.xx), 16 Jan 07:19:14 (name2), xx.210.xx.24), 16 Jan 07:19:17 (name3), xx.140.xxx.79), 16 Jan 07:19:44 (name4), xx.210.49.xx), 16 Jan 07:19:56 (name5), xx.140.xxx.79), How can I use sed, awk or grep to remove everything except the date, name and IP, so it looks like this: 16 Jan 07:18:42 name1 xx.210.49.xx 16 Jan 07:19:14 name2 xx.210.xx.24 16 Jan 07:19:17 name3 xx.140.xxx.79 16 Jan 07:19:44 name4 xx.210.49.xx 16 Jan 07:19:56 name5 xx.140.xxx.79 My grep command looks like this: grep 'double' $DAEMON | awk -F" " '{print $2" "$1" "$3" "$8" "$10}' > $DBLOG Thx.
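    Whichever tool does the deleting, the core is a single character class listing the punctuation to strip (in sed or tr, remember a literal ] has to come first inside the brackets). The class is prototyped here in Python so it is easy to test against one of the sample lines; the same set of characters can then be dropped into a sed 's/[...]//g' or tr -d stage appended to the existing pipeline:

```python
import re

line = "16 Jan 07:18:42 (name1), xx.210.49.xx),"

# Delete parentheses, square brackets, quotes, asterisks and commas.
cleaned = re.sub(r"[()\[\]'*,]", "", line)
print(cleaned)  # 16 Jan 07:18:42 name1 xx.210.49.xx
```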

    Read the article

  • Right way to access the Google Cloud Storage bucket via Public API

    - by SyBer
    I'm trying the following curl request to access the bucket via the public API: curl -X POST -H 'Content-Type: image/jpeg' -d @xxx.jpeg 'https://www.googleapis.com/upload/storage/v1/b/clips.eyecam.com/o?uploadType=media&name=x.jpeg&key=XXX' with XXX being the key generated for the public API. However, I'm getting an authorization failure: { "error": { "errors": [ { "domain": "global", "reason": "required", "message": "Login Required", "locationType": "header", "location": "Authorization" } ], "code": 401, "message": "Login Required" } } It seems the request is incorrect and does not pass the authorization key; any idea what the right form of the request would be?
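    The "location": "Authorization" hint means the JSON API is looking for OAuth credentials in an Authorization header; an API key alone is not accepted for uploads. One way to get a correctly authorized request without hand-crafting headers is the official client library, sketched below under the assumption that google-cloud-storage is installed and Application Default Credentials are configured for the project:

```python
# Upload via the Cloud Storage client library instead of a bare API key.
# Assumption: "pip install google-cloud-storage" and ADC credentials are set up.
from google.cloud import storage

client = storage.Client()                      # picks up application default credentials
bucket = client.bucket("clips.eyecam.com")     # bucket name from the question
blob = bucket.blob("x.jpeg")
blob.upload_from_filename("xxx.jpeg", content_type="image/jpeg")
```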

    Read the article
