Search Results

Search found 3471 results on 139 pages for 'docs'.


  • phpDocumentor to wiki?

    - by Mike
    My company is working on end user/developer docs using MediaWiki. I'd like to take a lot of the API docs that phpDocumentor spits out (for specific functions/methods) and have them in wiki markup for easy transfer. Are there any solutions out there for getting wiki markup or wiki output from phpDoc? I've looked around and found nothing.

    Read the article

  • How to insert item which type is QString to a ComboBox in Qt?

    - by Hanny
    I have a variable whose type is QString, for example: QString name = "Name". The question is: how can I put this name variable into a combo box? I've read the Qt 4.6.2 docs but still can't figure out how to do that. Please don't refer me back to those Qt 4.6.2 docs again. Please help me ASAP!! this is really urgent! thank you very much!! :)
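
    A minimal sketch of the usual answer (assuming a QComboBox instance named comboBox, created in Qt Designer or in code):

        // QComboBox::addItem(const QString &) appends the string as a new entry
        QString name = "Name";
        comboBox->addItem(name);

        // or insert at a specific index instead of appending:
        comboBox->insertItem(0, name);  // index 0 = top of the list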

    Read the article

  • Can I get memory usage of processes running on the monitored server by newrelic REST API

    - by youlin
    According to the New Relic FAQ https://docs.newrelic.com/docs/server/server-monitor-faq, the Server Monitoring agent can report the top 20 processes that are using significant memory or I/O, and I can view the memory usage of those processes on the New Relic portal page. However, I cannot find any clue about how to get these metrics through the New Relic REST API (I can get the CPU usage of processes by REST API). Is it possible to do this?

    Read the article

  • offline Wordpress documentation

    - by baiano
    I am working on a WordPress theme and am looking for a downloadable set of documentation files so that I can access them even when I don't have access to the internet. I am looking for something similar to the PHP docs found at: http://www.php.net/download-docs.php Does anyone know where I might find something like that for WordPress?

    Read the article

  • How To Read C# Code From Java Background

    - by ChloeRadshaw
    I come from a Java background and have been using C# for the last year - So far the API docs I use are at MSDN (http://msdn.microsoft.com/en-us/library/ms132397.aspx). I tend to use the lightweight C# docs. What annoys me about that is that I don't see one page with details of the class, a list of members, a list of methods and properties like I would with a Java API definition. Is this possible?

    Read the article

  • Facebook Connect: Redirect to FB login page instead of popup

    - by AyKarsi
    I'm just starting off getting Facebook Connect to work. One thing that really bugs me is that when I click the FB login button, it opens a popup. I would much prefer to have the user redirected to FB, where he can log in and be sent back to my site. (Just like OpenID login works here at SO.) I can't find anything in the docs, although I have the feeling I'm not looking in the right place: http://developers.facebook.com/docs/reference/javascript/fb.api/
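
    For what it's worth (hedged: the popup behavior is baked into the JavaScript SDK's FB.login, so this switches flows rather than configuring the button): Facebook's server-side OAuth flow does exactly the SO-style redirect described above. The app id and callback URL below are placeholders:

        https://www.facebook.com/dialog/oauth?client_id=YOUR_APP_ID&redirect_uri=http://www.example.com/fb-callback

    Sending the user to that URL shows the login/authorization page on facebook.com itself; Facebook then redirects back to redirect_uri with a code parameter that the server exchanges for an access token.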

    Read the article

  • WCF fails to deserialize correct(?) response message security headers (Security header is empty)

    - by Soeteman
    I'm communicating with an OC4J web service, using a WCF client. The client is configured as follows:

        <basicHttpBinding>
          <binding name="MyBinding">
            <security mode="TransportWithMessageCredential">
              <transport clientCredentialType="None" proxyCredentialType="None" realm=""/>
              <message clientCredentialType="UserName" algorithmSuite="Default"/>
            </security>
          </binding>
        </basicHttpBinding>

    My client code looks as follows:

        ServicePointManager.CertificatePolicy = new AcceptAllCertificatePolicy();
        string username = ConfigurationManager.AppSettings["user"];
        string password = ConfigurationManager.AppSettings["pass"];

        // create the client instance
        WebserviceClient client = new WebserviceClient();
        client.Endpoint.Binding = new BasicHttpBinding("MyBinding");

        // add the credentials
        client.ClientCredentials.UserName.UserName = username;
        client.ClientCredentials.UserName.Password = password;

        // execute the request
        var response = client.Ping();

    I've altered the CertificatePolicy to accept all certificates, because I need to insert Charles (an SSL proxy) between client and server to intercept the actual XML that is sent across the wire. My request looks as follows:

        <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          <s:Header>
            <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <u:Timestamp u:Id="_0">
                <u:Created>2010-04-01T09:47:01.161Z</u:Created>
                <u:Expires>2010-04-01T09:52:01.161Z</u:Expires>
              </u:Timestamp>
              <o:UsernameToken u:Id="uuid-9b39760f-d504-4e53-908d-6125a1827aea-21">
                <o:Username>user</o:Username>
                <o:Password o:Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">pass</o:Password>
              </o:UsernameToken>
            </o:Security>
          </s:Header>
          <s:Body>
            <getPrdStatus xmlns="http://mynamespace.org/wsdl">
              <request xmlns="" xmlns:a="http://mynamespace.org/wsdl" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
                <a:IsgrStsRequestTypeUser>
                  <a:prdCode>LEPTO</a:prdCode>
                  <a:sequenceNumber i:nil="true" />
                  <a:productionType i:nil="true" />
                  <a:statusDate>2010-04-01T11:47:01.1617641+02:00</a:statusDate>
                  <a:ubn>123456</a:ubn>
                  <a:animalSpeciesCode>RU</a:animalSpeciesCode>
                </a:IsgrStsRequestTypeUser>
              </request>
            </getPrdStatus>
          </s:Body>
        </s:Envelope>

    In return, I receive the following response:

        <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://mynamespace.org/wsdl">
          <env:Header>
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1" />
          </env:Header>
          <env:Body>
            <ns0:getPrdStatusResponse>
              <result>
                <ns0:IsgrStsResponseTypeUser>
                  <ns0:prdCode>LEPTO</ns0:prdCode>
                  <ns0:color>green</ns0:color>
                  <ns0:stsCode>LEP1</ns0:stsCode>
                  <ns0:sequenceNumber xsi:nil="1" />
                  <ns0:productionType xsi:nil="1" />
                  <ns0:IAndRCode>00</ns0:IAndRCode>
                  <ns0:statusDate>2010-04-01T00:00:00.000+02:00</ns0:statusDate>
                  <ns0:description>Gecertificeerd vrij</ns0:description>
                  <ns0:ubn>123456</ns0:ubn>
                  <ns0:animalSpeciesCode>RU</ns0:animalSpeciesCode>
                  <ns0:name>gecertificeerd vrij</ns0:name>
                  <ns0:ranking>17</ns0:ranking>
                </ns0:IsgrStsResponseTypeUser>
              </result>
            </ns0:getPrdStatusResponse>
          </env:Body>
        </env:Envelope>

    Why can't WCF deserialize this response header? I'm getting a "Security header is empty" exception:

        Server stack trace:
        at System.ServiceModel.Security.ReceiveSecurityHeader.Process(TimeSpan timeout)
        at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessageCore(Message& message, TimeSpan timeout)
        at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout)
        at System.ServiceModel.Security.SecurityProtocol.VerifyIncomingMessage(Message& message, TimeSpan timeout, SecurityProtocolCorrelationState[] correlationStates)
        at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.ProcessReply(Message reply, SecurityProtocolCorrelationState correlationState, TimeSpan timeout)
        at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.Request(Message message, TimeSpan timeout)
        at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
        at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
        at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

    Who knows what is going on here? I've already tried Rick Strahl's suggestion and removed the timestamp from the request header. Any help greatly appreciated!
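
    One avenue worth exploring (a sketch, not a verified fix; it assumes the failure really is WCF rejecting the empty <wsse:Security> element visible in the captured response): basicHttpBinding exposes no switch for tolerating an unsecured response, but the equivalent customBinding does, via the enableUnsecuredResponse attribute, available from the KB971493 hotfix onward and included in .NET 4:

        <customBinding>
          <binding name="MyCustomBinding">
            <security authenticationMode="UserNameOverTransport"
                      enableUnsecuredResponse="true" />
            <textMessageEncoding messageVersion="Soap11" />
            <httpsTransport />
          </binding>
        </customBinding>

    The client would then construct new CustomBinding("MyCustomBinding") instead of new BasicHttpBinding("MyBinding"); authenticationMode="UserNameOverTransport" plus the Soap11 message version mirror the original TransportWithMessageCredential/UserName setup.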

    Read the article

  • Configuring Oracle HTTP Server 12c for WebLogic Server Domain

    - by Emin Askerov
    Oracle HTTP Server (OHS) 12c 12.1.2, released in July 2013 as part of Oracle Web Tier 12c, is the web server component of Oracle Fusion Middleware. In essence it is Apache HTTP Server 2.2.22 (with critical bug fixes from later versions) plus modules developed specifically by Oracle. It provides listener functionality for Oracle WebLogic Server and a framework for hosting static pages, dynamic pages, and applications over the web. OHS can be easily managed by the WebLogic Management Framework, a set of tools which provides administrative capabilities (start, stop, lifecycle operations, etc.) for Oracle Fusion Middleware products. In other words, all the tools familiar to us (Node Manager, WLST, Administration Console, Fusion Middleware Control, etc.) are presented as part of the WebLogic Management Framework and are used for managing Java and System Components in both WebLogic Server and Standalone domain types. You can familiarize yourself with these terms using the related documentation:

    1. Introduction to Oracle HTTP Server: http://docs.oracle.com/middleware/1212/webtier/index.html
    2. WebLogic Management Framework: http://docs.oracle.com/middleware/1212/core/ASCON/terminology.htm#ASCON11260

    In this post I would like to cover a rather simple use case: how to configure OHS as a web proxy in a WebLogic cluster environment. For example, we have an existing WebLogic domain where some managed servers have been joined to a cluster and host business applications. We need to configure a web proxy component which will act as the entry point and load balancer for user requests to our cluster. Of course, we could install good old Apache HTTP Server and configure the mod_wl plugin. However, this solution is not optimal from a manageability perspective: we would need to install Apache, install an additional plugin, and then configure it by editing a configuration file, which is not really convenient for FMW administrators and often increases the time needed for a simple administrative task. Alternatively, we could use OHS as a System Component within the WebLogic domain and use the full power of the WebLogic Management Framework to configure, manage and monitor it! I like this idea! What about you? I hope after reading this post you will agree with me.

    First of all it is necessary to download the OHS binaries. You can use this link: http://www.oracle.com/technetwork/java/webtier/downloads/index2-303202.html

    As we will use Fusion Middleware Control for managing OHS instances, it is necessary to extend your domain with the Enterprise Manager and the Oracle ADF and JRF templates. That is not the topic of this post, but you can get more information from the documentation or one of my previous posts:

    http://docs.oracle.com/middleware/1212/wls/WLDTR/fmw_templates.htm#sthref64
    https://blogs.oracle.com/imc/entry/the_specifics_of_adf_12c

    Note: you should have a properly configured Node Manager utility for managing OHS instances.

    Let's consider the configuration process step by step:

    1. Shut down all WebLogic instances of the existing domain, including the Admin Server.
    2. Install Oracle HTTP Server. You should use your Fusion Middleware home path (e.g. /u01/Oracle/FMW12) as the Installation Location and select the Colocated HTTP Server option as the Installation Type. I will not focus on this topic in this post; all information related to OHS installation can be found here: http://docs.oracle.com/middleware/1212/webtier/WTINS/install_gui.htm#i1082009
    3. Next we need to extend our existing domain with the OHS component. In order to do this you should do the following:
       a. Run the Fusion Middleware Configuration Wizard (ORACLE_HOME/oracle_common/common/bin/config.sh);
       b. On step 1, select the Update an existing domain option and point to your Fusion Middleware home path;
       c. On step 2, check the Oracle HTTP Server and Oracle Enterprise Manager Plugin for WEBTIER templates;
       d. Go through the other steps without any changes and finish the configuration process.
    4. Start the Admin Server and all managed servers related to your cluster.
    5. Log in to Enterprise Manager FMW Control using the http://<hostname>:<port>/em URL.
    6. Now we will create an OHS instance within our WebLogic domain infrastructure. Navigate to Weblogic Domain -> Administration -> Create/Delete OHS.
    7. Enter edit mode by clicking Changes -> Lock&Edit.
    8. Create a new OHS instance by clicking the Create button.
    9. Define the Instance Name (e.g. DevOSH) and Machine parameters.
    10. Now we need to define the listen port. By default OHS will use port 7777 for incoming HTTP requests. We can change it to any free port number we would like to use. In order to do so, right-click the created OHS instance (left-hand panel) and navigate to Administration -> Port Configuration.
    11. Click on the record with port number 7777 and then click the Edit button.
    12. Change the port number value (in our case this will be 8080) and then click the OK button.
    13. Now we need to edit the mod_wl_ohs configuration in order to enable OHS to act as a proxy for WebLogic Server instances/clusters.
    14. In order to do so, right-click the created OHS instance (left panel) and navigate to Administration -> mod_wl_ohs Configuration (the resulting configuration is sketched after this list):
       a. In WebLogic Cluster you should enter the cluster address (define <host:port> for all managed servers which participate in the cluster), e.g.: 192.168.56.2:7004,192.168.56.2:7005
       b. Define the WebLogic Port parameter at which the Oracle WebLogic Server host is listening for connection requests from the module (or from other servers);
       c. Check the Dynamic Server List option. This will dynamically update the cluster list for every request;
       d. In the Location table, define the list of endpoint locations which you would like to process. In order to do this, click the Add Row button and define the Location, WebLogic Cluster, Path Trim and Path Prefix parameters (if required);
       e. Click the Apply button in order to save the changes.
    15. Activate the changes by clicking Changes -> Activate Changes.
    16. Finally we will start the configured OHS instance. Right-click the OHS instance tree item under the Web Tier folder and select Control -> Start Up.
    17. Ensure that the OHS instance is up and running, then test your environment: run an application deployed to your WebLogic cluster, accessing it via the OHS web proxy.
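
    For reference, the settings entered in step 14 end up in a mod_wl_ohs.conf that looks roughly like the sketch below (the cluster addresses are the example values used above; the /myapp location is hypothetical, and the file itself is generated by FMW Control rather than hand-edited):

        <IfModule weblogic_module>
            WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
            DynamicServerList ON
        </IfModule>

        <Location /myapp>
            SetHandler weblogic-handler
            PathTrim /myapp
        </Location>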

    Read the article

  • Apache virtual host does not work properly

    - by Jori
    I have read a lot of information all over the Internet regarding this subject and cannot figure out what I'm doing wrong. I'm trying to host two websites under different names locally under Windows 7 with Apache's virtual hosting functionality. This is what I have done already. In the httpd.conf file I uncommented the following line, so that the virtual host configuration file is included in the main configuration sequence:

        # Virtual hosts
        Include conf/extra/httpd-vhosts.conf

    This is how I edited my httpd-vhosts.conf:

        #
        # Virtual Hosts
        #
        # If you want to maintain multiple domains/hostnames on your
        # machine you can setup VirtualHost containers for them. Most configurations
        # use only name-based virtual hosts so the server doesn't need to worry about
        # IP addresses. This is indicated by the asterisks in the directives below.
        #
        # Please see the documentation at
        # <URL:http://httpd.apache.org/docs/2.2/vhosts/>
        # for further details before you try to setup virtual hosts.
        #
        # You may use the command line option '-S' to verify your virtual host
        # configuration.

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        #
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        #
        #<VirtualHost *:80>
        #    ServerAdmin [email protected]
        #    DocumentRoot "C:/apache/docs/dummy-host.localhost"
        #    ServerName dummy-host.localhost
        #    ServerAlias www.dummy-host.localhost
        #    ErrorLog "logs/dummy-host.localhost-error.log"
        #    CustomLog "logs/dummy-host.localhost-access.log" common
        #</VirtualHost>
        #
        #<VirtualHost *:80>
        #    ServerAdmin [email protected]
        #    DocumentRoot "C:/apache/docs/dummy-host2.localhost"
        #    ServerName dummy-host2.localhost
        #    ErrorLog "logs/dummy-host2.localhost-error.log"
        #    CustomLog "logs/dummy-host2.localhost-access.log" common
        #</VirtualHost>

        <VirtualHost *:80>
            ServerName arterieur
            DocumentRoot "J:/webcontent/www20"
            <Directory "J:/webcontent/www20">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    As you can see, I commented out the VirtualHost examples and added my own (just one for this example). I am also sure that J:\webcontent\www20 exists. Lastly, I edited the Windows hosts file located at C:\Windows\System32\drivers\etc\hosts, so it now looks like this:

        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host

        # localhost name resolution is handled within DNS itself.
        #    127.0.0.1       localhost
        #    ::1             localhost
        127.0.0.1       arterieur

    Then I restarted Apache with the Apache Service Monitor, and it gave me the following fatal error: "The requested operation has failed!". I tried to look at the apache/logs/error.log file, but nothing was logged; I guess it only logs errors after a successful startup. Does anyone know what I'm doing wrong?
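
    Not a definitive diagnosis, but when the Service Monitor only reports "The requested operation has failed!", Apache's own config checker usually names the offending file and line. From a command prompt in Apache's bin directory:

        httpd.exe -t
        httpd.exe -S

    The first command validates the configuration syntax; the second dumps the parsed virtual host setup (the same -S mentioned in the vhosts file comments above). Startup failures that never reach error.log generally show up in this output.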

    Read the article

  • Lenovo X220 right click does not work with ubuntu 12.04

    - by fulop
    I am unable to right-click with my new Lenovo X220 sub-notebook. I have read about several workarounds, but I don't even know which one would help me. Can someone help me find the solution or a workaround? This is the build output I get:

        dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): -D_FORTIFY_SOURCE=2
        dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions -Wl,-z,relro
        dpkg-buildpackage: source package xserver-xorg-input-synaptics
        dpkg-buildpackage: source version 1.6.2-1ubuntu1~precise2
        dpkg-buildpackage: source changed by Timo Aaltonen <[email protected]>
        dpkg-buildpackage: host architecture amd64
        dpkg-source --before-build xserver-xorg-input-synaptics-1.6.2
        fakeroot debian/rules clean
        dh clean --with quilt,autoreconf,xsf --builddirectory=build/
        dh_testdir -O--builddirectory=build/
        dh_auto_clean -O--builddirectory=build/
        dh_quilt_unpatch -O--builddirectory=build/
        Removing patch 131_reset-num_active_touches-on-deviceoff.patch
        Restoring src/synaptics.c
        Removing patch 130_dont_enable_rightbutton_area.patch
        Restoring conf/50-synaptics.conf
        Removing patch 129_disable_three_touch_tap.patch
        Restoring src/synaptics.c
        Removing patch 128_disable_three_click_action.patch
        Restoring src/synaptics.c
        Removing patch 126_ubuntu_xi22.patch
        Restoring configure.ac
        Removing patch 125_option_rec_revert.patch
        Restoring test/fake-symbols.h
        Restoring test/fake-symbols.c
        Removing patch 124_syndaemon_events.patch
        Restoring tools/syndaemon.c
        Removing patch 118_quell_error_msg.patch
        Restoring tools/synclient.c
        Restoring tools/syndaemon.c
        Removing patch 115_evdev_only.patch
        Restoring conf/50-synaptics.conf
        Removing patch 106_always_enable_vert_edge_scroll.patch
        Restoring src/synaptics.c
        Removing patch 104_always_enable_tapping.patch
        Restoring src/synaptics.c
        Removing patch 103_enable_cornertapping.patch
        Restoring src/synaptics.c
        Removing patch 101_resolution_detect_option.patch
        Restoring include/synaptics-properties.h
        Restoring man/synaptics.man
        Restoring src/synapticsstr.h
        Restoring src/properties.c
        Restoring src/synaptics.c
        Restoring tools/synclient.c
        Removing patch 02-do-not-use-synaptics-for-keyboards.patch
        Restoring conf/11-x11-synaptics.fdi
        No patches applied
        dh_autoreconf_clean -O--builddirectory=build/
        dh_clean -O--builddirectory=build/
        dpkg-source -b xserver-xorg-input-synaptics-1.6.2
        dpkg-source: warning: no source format specified in debian/source/format, see dpkg-source(1)
        dpkg-source: info: using source format `1.0'
        dpkg-source: info: building xserver-xorg-input-synaptics using existing xserver-xorg-input-synaptics_1.6.2.orig.tar.gz
        dpkg-source: info: building xserver-xorg-input-synaptics in xserver-xorg-input-synaptics_1.6.2-1ubuntu1~precise2.diff.gz
        dpkg-source: warning: the diff modifies the following upstream files:
        autogen.sh
        docs/README.alps
        docs/tapndrag.dia
        docs/trouble-shooting.txt
        dpkg-source: info: use the '3.0 (quilt)' format to have separate and documented changes to upstream files, see dpkg-source(1)
        dpkg-source: info: building xserver-xorg-input-synaptics in xserver-xorg-input-synaptics_1.6.2-1ubuntu1~precise2.dsc
        debian/rules build
        dh build --with quilt,autoreconf,xsf --builddirectory=build/
        dh_testdir -O--builddirectory=build/
        dh_quilt_patch -O--builddirectory=build/
        Applying patch 02-do-not-use-synaptics-for-keyboards.patch
        patching file conf/11-x11-synaptics.fdi
        Hunk #1 succeeded at 9 (offset 7 lines).
        Applying patch 101_resolution_detect_option.patch
        patching file include/synaptics-properties.h
        patching file man/synaptics.man
        patching file src/properties.c
        Hunk #3 succeeded at 787 (offset 6 lines).
        patching file src/synaptics.c
        Hunk #2 succeeded at 1403 (offset 3 lines).
        Hunk #3 succeeded at 1421 (offset 3 lines).
        patching file src/synapticsstr.h
        patching file tools/synclient.c
        Applying patch 103_enable_cornertapping.patch
        patching file src/synaptics.c
        Hunk #1 succeeded at 762 with fuzz 1 (offset 202 lines).
        Applying patch 104_always_enable_tapping.patch
        patching file src/synaptics.c
        Hunk #1 succeeded at 662 with fuzz 2 (offset 6 lines).
        Applying patch 106_always_enable_vert_edge_scroll.patch
        patching file src/synaptics.c
        Hunk #1 succeeded at 673 (offset 174 lines).
        Applying patch 115_evdev_only.patch
        patching file conf/50-synaptics.conf
        Hunk #1 succeeded at 14 with fuzz 2.
        Applying patch 118_quell_error_msg.patch
        patching file tools/synclient.c
        patching file tools/syndaemon.c
        Applying patch 124_syndaemon_events.patch
        patching file tools/syndaemon.c
        Applying patch 125_option_rec_revert.patch
        patching file test/fake-symbols.c
        patching file test/fake-symbols.h
        Applying patch 126_ubuntu_xi22.patch
        patching file configure.ac
        Applying patch 128_disable_three_click_action.patch
        patching file src/synaptics.c
        Hunk #1 succeeded at 671 (offset 174 lines).
        Applying patch 129_disable_three_touch_tap.patch
        patching file src/synaptics.c
        Hunk #1 succeeded at 665 (offset 32 lines).
        Applying patch 130_dont_enable_rightbutton_area.patch
        patching file conf/50-synaptics.conf
        Applying patch 131_reset-num_active_touches-on-deviceoff.patch
        patching file src/synaptics.c
        Applying patch 201-wait.patch
        patching file src/eventcomm.c
        Hunk #1 FAILED at 750.
        Hunk #2 FAILED at 775.
        Hunk #3 FAILED at 784.
        3 out of 3 hunks FAILED -- rejects in file src/eventcomm.c
        Patch 201-wait.patch does not apply (enforce with -f)
        dh_quilt_patch: quilt --quiltrc /dev/null push -a || test $? = 2 returned exit code 1
        make: *** [build] Error 25
        dpkg-buildpackage: error: debian/rules build gave error exit status 2

    Read the article

  • Using section header in Sendgrid

    - by Zefiryn
    I am trying to send emails through SendGrid in a Zend application. I copied the PHP code from the SendGrid documentation (the smtpapi class and Swift). I created a template with places that should be substituted with %variable%. Then I create headers for SendGrid as defined here: http://docs.sendgrid.com/documentation/api/smtp-api/developers-guide/ As a result I get something looking like this:

        {
          "to": ["[email protected]", "[email protected]", "[email protected]", "[email protected]", "[email protected]"],
          "sub": {
            "%firstname%": ["Benny", "Chaim", "Ephraim", "Yehuda", "will"]
          },
          "section": {
            "%postername%": "Rabbi Yitzchak Lieblich",
            "%postermail%": "[email protected]",
            "%categoryname%": "General",
            "%threadname%": "Completely new thread",
            "%post%": "This thread is to inform you about something very important",
            "%threadurl%": "http:\/\/hb.local\/forums\/general\/thread\/143",
            "%replyto%": "http:\/\/hb.local\/forums\/general\/thread\/143",
            "%unsubscribeurl%": "http:\/\/hb.local\/forums\/settings\/",
            "%subscribeurl%": "http:\/\/hb.local\/forums\/subscribe-thread\/id\/143\/token\/1b20eb7799829e22ba2d48ca0867d3ce"
          }
        }

    Now, while all data defined in "sub" changes, I cannot make "section" work. In the final email I still get %postername%. When I move this data to "sub" and repeat it for each email, everything works fine. Has anyone a clue what I am doing wrong? Docs for sections are here: http://docs.sendgrid.com/documentation/api/smtp-api/developers-guide/section-tags/
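
    A hunch worth checking against the section-tags docs (stated as an assumption, since the headers above look otherwise well-formed): SendGrid only expands section tags that appear as the value of a "sub" entry; a section tag placed directly in the template body stays literal. Under that reading, the pattern would look like this (tag names hypothetical):

        {
          "sub": {
            "%greeting%": ["%section_a%", "%section_b%"]
          },
          "section": {
            "%section_a%": "Hello first recipient!",
            "%section_b%": "Hello second recipient!"
          }
        }

    Here the template contains %greeting%, "sub" maps each recipient to a section tag, and "section" supplies the shared bodies once instead of repeating them per recipient.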

    Read the article

  • How to implement Priority Queues in Python?

    - by dragosrsupercool
    Sorry for such a silly question, but the Python docs are confusing...

    Link 1: Queue implementation: http://docs.python.org/library/queue.html It says that Queue has a construct for a priority queue, but I could not find how to use it:

        class Queue.PriorityQueue(maxsize=0)

    Link 2: Heap implementation: http://docs.python.org/library/heapq.html Here they say that we can implement priority queues indirectly using heapq:

        import itertools
        from heapq import heappush, heappop

        pq = []                         # list of entries arranged in a heap
        entry_finder = {}               # mapping of tasks to entries
        REMOVED = '<removed-task>'      # placeholder for a removed task
        counter = itertools.count()     # unique sequence count

        def add_task(task, priority=0):
            'Add a new task or update the priority of an existing task'
            if task in entry_finder:
                remove_task(task)
            count = next(counter)
            entry = [priority, count, task]
            entry_finder[task] = entry
            heappush(pq, entry)

        def remove_task(task):
            'Mark an existing task as REMOVED. Raise KeyError if not found.'
            entry = entry_finder.pop(task)
            entry[-1] = REMOVED

        def pop_task():
            'Remove and return the lowest priority task. Raise KeyError if empty.'
            while pq:
                priority, count, task = heappop(pq)
                if task is not REMOVED:
                    del entry_finder[task]
                    return task
            raise KeyError('pop from an empty priority queue')

    Which is the most efficient priority queue implementation in Python, and how do I implement it?
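
    For what it's worth, the PriorityQueue class from the first link can be used directly; a minimal sketch (Python 2 naming, to match the linked docs; the module is called queue in Python 3):

        from Queue import PriorityQueue

        pq = PriorityQueue()
        pq.put((2, 'write report'))     # entries are (priority, item) tuples
        pq.put((1, 'fix bug'))          # lower numbers come out first
        while not pq.empty():
            priority, task = pq.get()
            print task                  # -> 'fix bug', then 'write report'

    The heapq recipe above buys two extras that PriorityQueue lacks out of the box: O(log n) priority updates for existing tasks and stable ordering for equal priorities (via the counter).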

    Read the article

  • Xapian gem failed to install on Mac OS X Snow Leopard + macports

    - by goodwill
    I have installed xapian-core + xapian-bindings with macports on snow leopard, then trying to install xapian gem fails: Building native extensions. This could take a while... ERROR: Error installing xapian: ERROR: Failed to build gem native extension. /opt/ruby-enterprise/bin/ruby extconf.rb ./configure --with-ruby checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... ./install-sh -c -d checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... yes checking how to create a ustar tar archive... gnutar checking build system type... i386-apple-darwin10.3.0 checking host system type... i386-apple-darwin10.3.0 checking for style of include used by make... GNU checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for ld used by gcc... /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld checking if the linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) is GNU ld... no checking for /usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm checking whether ln -s works... yes checking how to recognize dependent libraries... pass_all checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking how to run the C++ preprocessor... g++ -E checking the maximum length of command line arguments... 196608 checking command to parse /usr/bin/nm output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking for dsymutil... dsymutil checking for nmedit... nmedit checking for -single_module linker flag... yes checking for -exported_symbols_list linker flag... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fno-common checking if gcc PIC flag -fno-common works... yes checking if gcc static flag -static works... no checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/libexec/gcc/i686-apple-darwin10/4.2.1/ld) supports shared libraries... yes checking dynamic linker characteristics... darwin10.3.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... 
yes checking whether to build shared libraries... yes checking whether to build static libraries... no checking whether we are using the GNU C++ compiler... (cached) yes checking whether g++ accepts -g... (cached) yes checking dependency style of g++... (cached) gcc3 checking for xapian-config... /opt/local/bin/xapian-config checking /opt/local/bin/xapian-config works... yes checking whether to enable maintainer-specific portions of Makefiles... no checking for ruby1.8... no checking for ruby... /opt/ruby-enterprise/bin/ruby checking /opt/ruby-enterprise/bin/ruby version... ruby 1.8.7 (2009-06-12 patchlevel 174) [i686-darwin10.2.0], MBARI 0x6770, Ruby Enterprise Edition 2009.10 checking for /opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0/ruby.h... yes checking ruby/io.h... no checking whether to use -fvisibility=hidden... yes configure: creating ./config.status config.status: creating Makefile config.status: creating xapian-version.h config.status: creating python/Makefile config.status: creating python/docs/Makefile config.status: creating php/Makefile config.status: creating php/docs/Makefile config.status: creating java/Makefile config.status: creating java/native/Makefile config.status: creating java/org/xapian/Makefile config.status: creating java/org/xapian/errors/Makefile config.status: creating java/org/xapian/examples/Makefile config.status: creating java-swig/Makefile config.status: creating tcl8/Makefile config.status: creating tcl8/docs/Makefile config.status: creating tcl8/pkgIndex.tcl config.status: creating csharp/Makefile config.status: creating csharp/docs/Makefile config.status: creating csharp/AssemblyInfo.cs config.status: creating ruby/Makefile config.status: creating ruby/docs/Makefile config.status: creating xapian-bindings.spec config.status: creating python/generate-python-exceptions config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands *** Building bindings for languages: ruby make make all-recursive Making all in ruby make all-recursive Making all in docs make all-am make[5]: Nothing to be done for `all-am'. /bin/sh ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I.. -I/opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0 -I/opt/ruby-enterprise/lib/ruby/1.8/i686-darwin10.2.0 -fno-strict-aliasing -Wall -Wno-unused -Wno-uninitialized -fvisibility=hidden -I/opt/local/include -g -O2 -MT xapian_wrap.lo -MD -MP -MF .deps/xapian_wrap.Tpo -c -o xapian_wrap.lo xapian_wrap.cc ../libtool: line 393: /bin/sed: No such file or directory ../libtool: line 393: /bin/sed: No such file or directory ../libtool: line 792: /bin/sed: No such file or directory : ignoring unknown tag ../libtool: line 792: /bin/sed: No such file or directory *** Warning: inferring the mode of operation is deprecated. *** Future versions of Libtool will require --mode=MODE be specified. 
../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1103: /bin/sed: No such file or directory ../libtool: line 1156: /bin/sed: No such file or directory : compile: cannot determine name of library object from `' make[4]: *** [xapian_wrap.lo] Error 1 make[3]: *** [all-recursive] Error 1 make[2]: *** [all] Error 2 make[1]: *** [all-recursive] Error 1 make: *** [all] Error 2 extconf.rb:3:in `system!': unhandled exception from extconf.rb:6 Gem files will remain installed in /opt/ruby-enterprise/lib/ruby/gems/1.8/gems/xapian-1.0.15 for inspection. Results logged to /opt/ruby-enterprise/lib/ruby/gems/1.8/gems/xapian-1.0.15/gem_make.out Any idea pal?

    Read the article

  • How to count term frequency for set of documents?

    - by ManBugra
    I have a Lucene index with the following documents:

        doc1 := { caldari, jita, shield, planet }
        doc2 := { gallente, dodixie, armor, planet }
        doc3 := { amarr, laser, armor, planet }
        doc4 := { minmatar, rens, space }
        doc5 := { jove, space, secret, planet }

    So these 5 documents use 14 different terms: [ caldari, jita, shield, planet, gallente, dodixie, armor, amarr, laser, minmatar, rens, jove, space, secret ] The frequency of each term: [ 1, 1, 1, 4, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1 ] Or, for easier reading: [ caldari:1, jita:1, shield:1, planet:4, gallente:1, dodixie:1, armor:2, amarr:1, laser:1, minmatar:1, rens:1, jove:1, space:2, secret:1 ]

    What I want to know is how to obtain the term frequency vector for a set of documents. For example:

        Set<Document> docs := [ doc2, doc3 ]
        termFrequencies = magicFunction(docs);
        System.out.print( termFrequencies );

    would result in the output: [ caldari:0, jita:0, shield:0, planet:2, gallente:1, dodixie:1, armor:2, amarr:1, laser:1, minmatar:0, rens:0, jove:0, space:0, secret:0 ] Removing all zeros: [ planet:2, gallente:1, dodixie:1, armor:2, amarr:1, laser:1 ]

    Notice that the result vector contains only the term frequencies of the set of documents, NOT the overall frequencies of the whole index! The term 'planet' is present 4 times in the whole index, but the source set of documents contains it only 2 times. A naive implementation would be to just iterate over all documents in the docs set, create a map and count each term. But I need a solution that would also work with a document set size of 100,000 or 500,000. Is there a feature in Lucene I can use to obtain this term vector? If there is no such feature, what would a data structure created at index time look like, so that such a term vector can be obtained easily and fast? I'm not a Lucene expert, so I'm sorry if the solution is obvious or trivial.
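
    One possibility, sketched under two assumptions: the field was indexed with term vectors enabled (Field.TermVector.YES), and a Lucene 3.x-era API is in use, where IndexReader.getTermFreqVector is available:

        import java.util.HashMap;
        import java.util.Map;
        import org.apache.lucene.index.IndexReader;
        import org.apache.lucene.index.TermFreqVector;

        public class TermFreqAggregator {
            // Sums the stored per-document term frequencies over a set of doc ids.
            public static Map<String, Integer> aggregate(IndexReader reader, int[] docIds, String field)
                    throws Exception {
                Map<String, Integer> freqs = new HashMap<String, Integer>();
                for (int docId : docIds) {
                    TermFreqVector tfv = reader.getTermFreqVector(docId, field);
                    if (tfv == null) continue;  // field not indexed with term vectors
                    String[] terms = tfv.getTerms();
                    int[] counts = tfv.getTermFrequencies();
                    for (int i = 0; i < terms.length; i++) {
                        Integer old = freqs.get(terms[i]);
                        freqs.put(terms[i], old == null ? counts[i] : old + counts[i]);
                    }
                }
                return freqs;
            }
        }

    This stays one disk seek per document per field, so for sets in the 100,000+ range a precomputed structure (or sharding the aggregation) may still be needed; the sketch only shows the Lucene feature being asked about.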

    Read the article

  • Rhino Commons and Rhino Mocks Reference Documents?

    - by Ogre Psalm33
    Ok, is it just me, or does there seem to be a lack of (easy to find) reference documentation for Rhino Commons and Rhino Mocks? My coworkers have started using Rhino Mocks and Rhino Commons (particularly the NHibernate stuff), and I found a few tutorial-ish examples, which were good. But when I see them making use of a class in their code--let's pick something like Rhino.Commons.NHRepository, for example--I have been having a hard time just finding someplace on the web that tells me what Rhino.Commons.NHRepository is or what it does. I like to learn by looking at real examples, but using this approach, it's very handy to look at what the full docs are for a class, instead of just the current context. Similarly, I saw IaMockedRepository.Expect(...) being used in some code, but it took me forever to finally find this page that explains the AAA syntax for Rhino Mocks, which made it clear to me. I've found the Ayende.com wiki on Rhino Commons, but that seems to have a number of broken links. To me, the Rhino libraries seem like a great set of libraries in need of some desperate community help in the documentation area (Of course, as we all know, documentation is not the forte of most coders, and incomplete docs are all too common). Does anyone know if this is something in the works, someplace that some volunteer documenters are needed, or is there some great reference docs out there that I have somehow missed to Rhino Mocks and Rhino Commons?

    Read the article

  • [Python] name 'OptionGroup' is not defined

    - by Cawas
    Ok, so I made this rookie mistake below, but in my defense I was led to it by how this subject is presented in the Python docs, which state how to use optparse. It is actually an error hidden under the gigantic tutorial section. In contrast, and to my own discredit, I may be one of the very few people who can't read very well and pay close attention to what they do. But since this took me so long to discover, I wanted to "document" it here:

        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        >>> from optparse import OptionParser
        >>> outputGroup = OptionGroup(parser, 'Output handling')
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        NameError: name 'OptionGroup' is not defined

    This is strictly done with examples found in the docs, and you can't find anything about it anywhere, be it that long, long docs page, Google or Stack Overflow. Plus, reading optparse.py shows OptionGroup is there, which adds to the confusion. I bet it will take less than 1 minute for someone to spot my error. For that I'll only add proper tags and/or modify the title later on. :)
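
    The one-minute spot, for the record: OptionGroup simply isn't imported in the session above (and parser is never created). Both problems go away with:

        from optparse import OptionParser, OptionGroup

        parser = OptionParser()
        outputGroup = OptionGroup(parser, 'Output handling')
        parser.add_option_group(outputGroup)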

    Read the article

  • Renaming TurboGears 2's Repoze Fields with TGAdmin

    - by William Chambers
    I've been working on renaming TurboGears 2's repoze 'groups' field to 'roles' to free the namespace and db tables for other purposes. Also, roles makes much more sense to me than groups because I have a strong Drupal background. Now, I have found some docs on how to do this, such as these: http://www.turbogears.org/2.1/docs/main/Auth/Customization.html#customizing-the-model-structure-assumed-by-the-quickstart http://code.gustavonarea.net/repoze.what-quickstart/#customizing-the-model-definition However, these only go part of the way. I have made (I'm pretty sure, at least; I've double-checked a few times) all the changes required, as you can see in this diff. This seems to work fine; however, I've run into a rather major issue with the TurboGears admin system. I've tried http://turbogears.org/2.0/docs/main/Extensions/Admin/index.html and it didn't seem to make any difference, though I'm not 100% sure I did it correctly. The problem occurs when I attempt to go to localhost/admin/permissions/. It causes an Internal Server Error and outputs the following error: http://pastebin.com/YWMH3SiU This error does not happen on the Roles/Users pages, and /edit/1 under permissions also works. I'm running Kubuntu 10.04 with TG 2.1b2. (I'm running the beta mostly for easier Mako support, which is really important.) Any help would be very appreciated.

    Read the article

  • SQLite dataypes lengths?

    - by XF
    I'm completely new to SQLite (as of 5 minutes ago, actually), but I do know Oracle and MySQL somewhat. The question: I'm trying to find out the lengths of each of the datatypes supported by SQLite, such as the difference between a bigint and a smallint. I've searched the SQLite documentation (it only talks about affinity; is that all that matters?), SO threads, Google... and found nothing. My guess: I've just briefly revised the SQL92 specification, which talks about datatypes and their relations but not about their lengths, which is quite obvious, I assume. Yet I've come across the Oracle and MySQL datatype specs, and the specified lengths are mostly identical, for integers at least. Should I assume SQLite uses the same lengths? Aside question: have I missed something in the SQLite docs? Or have I missed something about SQL in general? I'm asking because I can't really understand why the SQLite docs don't specify something as basic as datatype lengths. It just doesn't make sense to me! Although I'm sure there is a simple command to discover the lengths... but why not write them in the docs? Thank you!
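
    Some context that may explain the gap in the docs: SQLite types values rather than columns, so a declared bigint or smallint has no fixed width. Integers are stored in 1, 2, 3, 4, 6 or 8 bytes depending on their magnitude, and the declared type only sets an affinity. A quick way to see this from any SQLite shell:

        CREATE TABLE t (a BIGINT, b SMALLINT, c TEXT);
        INSERT INTO t VALUES (1, 1, 1);
        SELECT typeof(a), typeof(b), typeof(c) FROM t;
        -- integer | integer | text   (the TEXT-affinity column stored 1 as '1')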

    Read the article

  • What do I have to change in my PHP/CURL code to retrieve data from a https:// URL?

    - by Edward Tanguay
    I have a PHP file using cURL that accepts a Google Doc URL as a parameter, then returns the plain text of the Google Doc. It worked well until recently, when apparently a redirect was added so that the http:// address redirects to the equivalent https:// address, as in this example: http://docs.google.com/View?id=dc7gj86r_20dn2csqg3 So I changed my code to access the https:// address, but it just returns blank. What do I have to change in my cURL code so that I can get the HTML text from the https:// address?

        $url = filter_input(INPUT_GET, 'url', FILTER_SANITIZE_STRING);
        $validUrlPrefixes[] = "https://docs.google.com";
        if (beginsWithOneOfThese($url, $validUrlPrefixes)) {
            $user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)';
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_COOKIEJAR, "/tmp/cookie");
            curl_setopt($ch, CURLOPT_COOKIEFILE, "/tmp/cookie");
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_FAILONERROR, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_TIMEOUT, 15);
            curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
            curl_setopt($ch, CURLOPT_VERBOSE, 0);
            $rawData = curl_exec($ch);
            $rawData = cleanText($rawData);
            if (beginsWith($url, "https://docs.google.com")) {
                echo qstr::convertGoogleDocContentToText($rawData);
                die;
            }
            echo $rawData;
            die;
        }
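
    An educated guess rather than a confirmed fix: with HTTPS URLs the usual silent failure is certificate verification, and CURLOPT_FAILONERROR then leaves $rawData empty. Two options control that (disabling them is the quick test; keeping verification on and pointing CURLOPT_CAINFO at a CA bundle is the proper fix), and re-enabling redirect-following covers the http:// to https:// hop:

        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);  // or keep true and set CURLOPT_CAINFO to a CA bundle
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);      // follow the redirect instead of returning blank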

    Read the article

  • Cocoa NSTextField Drag & Drop Requires Subclass... Really?

    - by ipmcc
    Until today, I've never had occasion to use anything other than an NSWindow itself as an NSDraggingDestination. When using a window as a one-size-fits-all drag destination, the NSWindow will pass those messages on to its delegate, allowing you to handle drops without subclassing NSWindow. The docs say: Although NSDraggingDestination is declared as an informal protocol, the NSWindow and NSView subclasses you create to adopt the protocol need only implement those methods that are pertinent. (The NSWindow and NSView classes provide private implementations for all of the methods.) Either a window object or its delegate may implement these methods; however, the delegate’s implementation takes precedence if there are implementations in both places. Today, I had a window with two NSTextFields on it, and I wanted them to have different drop behaviors, and I did not want to allow drops anywhere else in the window. The way I interpret the docs, it seems that I either have to subclass NSTextField, or make some giant spaghetti-conditional drop handlers on the window's delegate that hit-checks the draggingLocation against each view in order to select the different drop-area behaviors for each field. The centralized NSWindow-delegate-based drop handler approach seems like it would be onerous in any case where you had more than a small handful of drop destination views. Likewise, the subclassing approach seems onerous regardless of the case, because now the drop handling code lives in a view class, so once you accept the drop you've got to come up with some way to marshal the dropped data back to the model. The bindings docs warn you off of trying to drive bindings by setting the UI value programmatically. So now you're stuck working your way back around that too. So my question is: "Really!? Are those the only readily available options? Or am I missing something straightforward here?" Thanks.
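
    For what it's worth, the subclass route is less code than the question implies; a minimal sketch (a hypothetical subclass, with the model hand-off left as a comment, since the right hookup depends on the app):

        @interface DropAwareTextField : NSTextField
        @end

        @implementation DropAwareTextField

        - (void)awakeFromNib {
            // Opt this field (and only this field) into string drags.
            [self registerForDraggedTypes:[NSArray arrayWithObject:NSStringPboardType]];
        }

        - (NSDragOperation)draggingEntered:(id <NSDraggingInfo>)sender {
            return NSDragOperationCopy;
        }

        - (BOOL)performDragOperation:(id <NSDraggingInfo>)sender {
            NSString *dropped = [[sender draggingPasteboard] stringForType:NSStringPboardType];
            if (dropped == nil) return NO;
            // Forward the value to the model here rather than calling -setStringValue:,
            // which sidesteps the bindings caveat mentioned above.
            return YES;
        }

        @end

    Each field gets its own drop behavior from its own subclass (or one subclass configured per instance), avoiding the window-delegate hit-testing spaghetti.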

    Read the article

  • Mercurial repository narrow clone?

    - by Berry Langerak
    Hi. I'm currently in the process of moving from Subversion to Mercurial, and I have to say I don't regret that decision. However, when trying to convert my project, I ran into a problem with Mercurial which I can't seem to get fixed. I have two distinct projects: one is a framework, and the other is an application that relies on that framework. Here's what the repositories look like:

    The framework repository:

        docs/
        deploy/
        lib/
        tests/

    The application repository:

        application/
        config/
        lib/
        tests/
        www/

    What I'd like is for the application's lib/ directory to contain a copy of the framework's lib/ directory. I used to do this using svn:externals. Now, I am aware that Mercurial supports the concept of subrepositories (see the sketch below), but that doesn't seem like the "correct" solution, as it doesn't actually pull in the lib/ directory as I wanted: you'll still have to pull and push changes manually. Plus, once you clone the framework repository, you'll get all of it, not just the lib/ directory. I only need the lib/ directory, not the tests or the docs. Now, I have thought up two different solutions to this problem, but I wonder which is the best. The first solution would be to clone the framework in a different directory altogether and create a symlink in the application's lib/ directory which points to the framework's lib/ directory. Putting the symlink in .hgignore should make sure all is well, I think? That means that you could edit the framework's code and commit that, and you could edit the application's code and commit that, too. The other option is to have multiple repositories. The framework gets pulled as a whole, which means you'll get the docs/, deploy/, tests/ etc. directories, which are not needed for usage of the framework. I thought maybe creating a repository purely for the library might be a solution, although I sincerely doubt it, as the unit tests are very dependent upon the library itself. Does anyone know a decent solution for this problem?
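
    For readers unfamiliar with the subrepository wiring dismissed above, it is a one-line mapping file plus a clone at that path (the repository URL here is hypothetical), and it does indeed pull the whole framework repository, which is exactly the objection raised:

        # .hgsub, at the root of the application repository
        lib/framework = https://example.com/hg/framework

    After `hg add .hgsub` and a commit, clones and pulls of the application will track the framework at the revision pinned in the generated .hgsubstate file.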

    Read the article

  • Basic user authentication with records in AngularFire

    - by ajkochanowicz
    Having spent literally days trying the different, various recommended ways to do this, I've landed on what I think is the most simple and promising. Also thanks to the kind gents from this SO question: Get the index ID of an item in Firebase AngularFire

    Current setup: users can log in with email and social networks, so when they create a record, it saves the userId as a sort of foreign key. Good so far. But I want to create a rule so twitter2934392 cannot read facebook63203497's records. Off to the security panel.

    Match the IDs on the backend: unfortunately, the docs are inconsistent with the method from "is firebase user id unique per provider (facebook, twitter, password)", which suggests appending the social network to the ID. The docs expect you to create a different rule for each login method's IDs. Why anyone using one login method would want to do that is beyond me. (From: https://www.firebase.com/docs/security/rule-expressions/auth.html)

    So I'll try to match the concatenation of auth.provider and auth.id to the userId stored on the respective registry item. According to the API, this should be as easy as the following (in my case using $registry instead of $user, of course):

        {
          "rules": {
            ".read": true,
            ".write": true,
            "registry": {
              "$registry": {
                ".read": "$registry == auth.id"
              }
            }
          }
        }

    But that won't work, because (see the first image above) AngularFire sets each record under an index value. In the image above, it's 0. Here's where things get complicated. Also, I can't test anything in the simulator, as I cannot edit the {some: 'json'} needed to even authenticate; the input box rejects any input. My best guess is the following:

        {
          "rules": {
            ".write": true,
            "registry": {
              "$registry": {
                ".read": "data.child('userId').val() == (auth.provider + auth.id)"
              }
            }
          }
        }

    This both throws authentication errors and simultaneously grants full read access to all users. I'm losing my mind. What am I supposed to do here?

    Read the article

  • PHP is truncating MSSQL Blob data (4096b), even after setting INI values. Am I missing one?

    - by Dutchie432
    I am writing a PHP script that goes through a table and extracts the varbinary(max) blob data from each record into an external file. The code is working perfectly, except when a file is over 4096 bytes: the data is truncated at exactly 4096. I've modified the values for mssql.textlimit, mssql.textsize, and odbc.defaultlrl without any success. Am I missing something here?

        <?php
        ini_set("mssql.textlimit", "2147483647");
        ini_set("mssql.textsize", "2147483647");
        ini_set("odbc.defaultlrl", "0");

        include_once('common.php');
        $id = $_REQUEST['i'];

        $q = odbc_exec($connect, "Select id,filename,documentBin from Projectdocuments where id = $id");
        if (odbc_fetch_row($q)) {
            echo "Trying $filename ... ";
            $fileName = "projectPhotos/docs/" . odbc_result($q, "filename");
            if (file_exists($fileName)) {
                unlink($fileName);
            }
            if ($fh = fopen($fileName, "wb")) {
                $binData = odbc_result($q, "documentBin");
                fwrite($fh, $binData);
                fclose($fh);
                $size = filesize($fileName);
                echo ("$fileName<br />Done ($size)<br><br>");
            } else {
                echo ("$fileName Failed<br>");
            }
        }
        ?>

    OUTPUT:

        Trying ... projectPhotos/docs/file1.pdf Done (4096)
        Trying ... projectPhotos/docs/file2.zip Done (4096)
        Trying ... projectPhotos/docsv3.pdf Done (4096)
        etc..
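
    A hedged suggestion, since the fetching here goes through the ODBC extension rather than the mssql one: odbc.defaultlrl defaults to exactly 4096, and the per-result override is odbc_longreadlen() (with odbc_binmode() to keep binary data intact), applied after odbc_exec() but before reading the column:

        $q = odbc_exec($connect, "Select id,filename,documentBin from Projectdocuments where id = $id");
        odbc_binmode($q, ODBC_BINMODE_RETURN);   // return binary column data as-is
        odbc_longreadlen($q, 2147483647);        // raise the long-column limit from the 4096 default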

    Read the article

  • How can I get access to password hashing in postgresql? Tried installing postgresql-contrib in ubun

    - by Tchalvak
    So I'm trying to hash some passwords in PostgreSQL, and the only hashing solution that I've found for PostgreSQL is part of the pgcrypto package (http://www.postgresql.org/docs/8.3/static/pgcrypto.html), which is supposed to be in postgresql-contrib (http://www.postgresql.org/docs/8.3/static/contrib.html). So I installed postgresql-contrib (sudo apt-get install postgresql-contrib) and restarted my server (as a simple way to restart PostgreSQL). However, I still don't have access to any of the hashing functions that are supposed to be in postgresql-contrib, e.g.:

        ninjawars=# select crypt('global salt' || 'new password' || 'user created date', gen_salt('sha256'));
        ERROR:  function gen_salt(unknown) does not exist
        ninjawars=# select digest('test', 'sha256') from players limit 1;
        ERROR:  function digest(unknown, unknown) does not exist
        ninjawars=# select hmac('test', 'sha256') from players limit 1;
        ERROR:  function hmac(unknown, unknown) does not exist

    So how can I hash passwords in PostgreSQL, on Ubuntu?
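
    A sketch of the usual 8.3-era fix (the exact path can differ between Ubuntu releases): installing the package only drops SQL scripts on disk; each database has to load them before the functions exist. From a shell:

        sudo -u postgres psql -d ninjawars -f /usr/share/postgresql/8.3/contrib/pgcrypto.sql

    One caveat once that works: gen_salt() only knows 'des', 'xdes', 'md5' and 'bf', so gen_salt('sha256') will still fail. crypt() is typically paired with gen_salt('bf'), while SHA-256 hashing goes through digest('...', 'sha256').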

    Read the article
