Search Results

Search found 3420 results on 137 pages for 'precompiled headers'.


  • MFC tab control: switching tabs

    - by MRM
    I created a simple tab control that has 2 tabs (each tab is a different dialog). The thing is that i don't have any idea how to switch between tabs (when the user presses Titlu Tab1 to show the dialog i made for the first tab, and when it presses Titlu Tab2 to show my other dialog). I added a handler for changing items, but i don't know how should i acces some kind of index or child for tabs. Tab1.h and Tab2.h are headers for dialogs that show only static texts with the name of the each tab. There may be an obvious answer to my question, but i am a real newbie in c++ and MFC. This is my header: // CTabControlDlg.h : header file // #pragma once #include "afxcmn.h" #include "Tab1.h" #include "Tab2.h" // CCTabControlDlg dialog class CCTabControlDlg : public CDialog { // Construction public: CCTabControlDlg(CWnd* pParent = NULL); // standard constructor // Dialog Data enum { IDD = IDD_CTABCONTROL_DIALOG }; protected: virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support // Implementation protected: HICON m_hIcon; // Generated message map functions virtual BOOL OnInitDialog(); afx_msg void OnSysCommand(UINT nID, LPARAM lParam); afx_msg void OnPaint(); afx_msg HCURSOR OnQueryDragIcon(); DECLARE_MESSAGE_MAP() public: CTabCtrl m_tabcontrol1; CTab1 m_tab1; CTab2 m_tab2; afx_msg void OnTcnSelchangeTabcontrol(NMHDR *pNMHDR, LRESULT *pResult); }; And this is the .cpp: // CTabControlDlg.cpp : implementation file // #include "stdafx.h" #include "CTabControl.h" #include "CTabControlDlg.h" #ifdef _DEBUG #define new DEBUG_NEW #endif // CAboutDlg dialog used for App About class CAboutDlg : public CDialog { public: CAboutDlg(); // Dialog Data enum { IDD = IDD_ABOUTBOX }; protected: virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support // Implementation protected: DECLARE_MESSAGE_MAP() }; CAboutDlg::CAboutDlg() : CDialog(CAboutDlg::IDD) { } void CAboutDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); } BEGIN_MESSAGE_MAP(CAboutDlg, CDialog) END_MESSAGE_MAP() // CCTabControlDlg dialog CCTabControlDlg::CCTabControlDlg(CWnd* pParent /*=NULL*/) : CDialog(CCTabControlDlg::IDD, pParent) { m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME); } void CCTabControlDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); DDX_Control(pDX, IDC_TABCONTROL, m_tabcontrol1); } BEGIN_MESSAGE_MAP(CCTabControlDlg, CDialog) ON_WM_SYSCOMMAND() ON_WM_PAINT() ON_WM_QUERYDRAGICON() //}}AFX_MSG_MAP ON_NOTIFY(TCN_SELCHANGE, IDC_TABCONTROL, &CCTabControlDlg::OnTcnSelchangeTabcontrol) END_MESSAGE_MAP() // CCTabControlDlg message handlers BOOL CCTabControlDlg::OnInitDialog() { CDialog::OnInitDialog(); // Add "About..." menu item to system menu. // IDM_ABOUTBOX must be in the system command range. ASSERT((IDM_ABOUTBOX & 0xFFF0) == IDM_ABOUTBOX); ASSERT(IDM_ABOUTBOX < 0xF000); CMenu* pSysMenu = GetSystemMenu(FALSE); if (pSysMenu != NULL) { CString strAboutMenu; strAboutMenu.LoadString(IDS_ABOUTBOX); if (!strAboutMenu.IsEmpty()) { pSysMenu->AppendMenu(MF_SEPARATOR); pSysMenu->AppendMenu(MF_STRING, IDM_ABOUTBOX, strAboutMenu); } } // Set the icon for this dialog. 
The framework does this automatically // when the application's main window is not a dialog SetIcon(m_hIcon, TRUE); // Set big icon SetIcon(m_hIcon, FALSE); // Set small icon // TODO: Add extra initialization here CTabCtrl* pTabCtrl = (CTabCtrl*)GetDlgItem(IDC_TABCONTROL); m_tab1.Create(IDD_TAB1, pTabCtrl); TCITEM item1; item1.mask = TCIF_TEXT | TCIF_PARAM; item1.lParam = (LPARAM)& m_tab1; item1.pszText = _T("Titlu Tab1"); pTabCtrl->InsertItem(0, &item1); //Pozitionarea dialogului CRect rcItem; pTabCtrl->GetItemRect(0, &rcItem); m_tab1.SetWindowPos(NULL, rcItem.left, rcItem.bottom + 1, 0, 0, SWP_NOSIZE | SWP_NOZORDER ); m_tab1.ShowWindow(SW_SHOW); // al doilea tab m_tab2.Create(IDD_TAB2, pTabCtrl); TCITEM item2; item2.mask = TCIF_TEXT | TCIF_PARAM; item2.lParam = (LPARAM)& m_tab1; item2.pszText = _T("Titlu Tab2"); pTabCtrl->InsertItem(0, &item2); //Pozitionarea dialogului //CRect rcItem; pTabCtrl->GetItemRect(0, &rcItem); m_tab2.SetWindowPos(NULL, rcItem.left, rcItem.bottom + 1, 0, 0, SWP_NOSIZE | SWP_NOZORDER ); m_tab2.ShowWindow(SW_SHOW); return TRUE; // return TRUE unless you set the focus to a control } void CCTabControlDlg::OnSysCommand(UINT nID, LPARAM lParam) { if ((nID & 0xFFF0) == IDM_ABOUTBOX) { CAboutDlg dlgAbout; dlgAbout.DoModal(); } else { CDialog::OnSysCommand(nID, lParam); } } // If you add a minimize button to your dialog, you will need the code below // to draw the icon. For MFC applications using the document/view model, // this is automatically done for you by the framework. void CCTabControlDlg::OnPaint() { if (IsIconic()) { CPaintDC dc(this); // device context for painting SendMessage(WM_ICONERASEBKGND, reinterpret_cast<WPARAM>(dc.GetSafeHdc()), 0); // Center icon in client rectangle int cxIcon = GetSystemMetrics(SM_CXICON); int cyIcon = GetSystemMetrics(SM_CYICON); CRect rect; GetClientRect(&rect); int x = (rect.Width() - cxIcon + 1) / 2; int y = (rect.Height() - cyIcon + 1) / 2; // Draw the icon dc.DrawIcon(x, y, m_hIcon); } else { CDialog::OnPaint(); } } // The system calls this function to obtain the cursor to display while the user drags // the minimized window. HCURSOR CCTabControlDlg::OnQueryDragIcon() { return static_cast<HCURSOR>(m_hIcon); } void CCTabControlDlg::OnTcnSelchangeTabcontrol(NMHDR *pNMHDR, LRESULT *pResult) { // TODO: Add your control notification handler code here *pResult = 0; }
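    A minimal sketch of the selection-change handler, assuming the first tab's dialog sits at index 0 and the second at index 1 (note that the OnInitDialog code above inserts both items at index 0 and points item2.lParam at m_tab1, so those two lines would need adjusting for the indices to line up):

        void CCTabControlDlg::OnTcnSelchangeTabcontrol(NMHDR *pNMHDR, LRESULT *pResult)
        {
            // Index of the tab the user just clicked
            int nSel = m_tabcontrol1.GetCurSel();

            // Show the dialog that belongs to the selected tab, hide the other one
            m_tab1.ShowWindow(nSel == 0 ? SW_SHOW : SW_HIDE);
            m_tab2.ShowWindow(nSel == 1 ? SW_SHOW : SW_HIDE);

            *pResult = 0;
        }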


  • Trouble setting SSL certificates for virtual hosts with Apache/Phusion Passenger on localhost

    - by user502052
    I am using Ruby on Rails 3 and I would like to make to work HTTPS connections on localhost. I am using: Apache v2 + Phusion Passenger Mac OS + Snow Leopard v10.6.6 My Ruby on Rails installation use the Typhoeus gem (it is possible to use the Ruby net\http library but the result doesn't change) to make HTTP requests over HTTPS. I created self-signed ca.key, pjtname.crt and pjtname.key as detailed on the Apple website. Notice: Following instruction from the Apple website, on running the openssl req -new -key server.key -out server.csr command (see the link) at this point Common Name (eg, YOUR name) []: (this is the important one) I entered *pjtname.com so that is valid for all sub_domain of that site. In my Apache httpd.conf I have two virtual hosts configured in this way: # Secure (SSL/TLS) connections #Include /private/etc/apache2/extra/httpd-ssl.conf # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include /private/etc/apache2/other/*.conf # Passenger configuration LoadModule passenger_module /Users/<my_user_name>/.rvm/gems/ruby-1.9.2-p136/gems/passenger-3.0.2/ext/apache2/mod_passenger.so PassengerRoot /Users/<my_user_name>/.rvm/gems/ruby-1.9.2-p136/gems/passenger-3.0.2 PassengerRuby /Users/<my_user_name>/.rvm/wrappers/ruby-1.9.2-p136/ruby # Go ahead and accept connections for these vhosts # from non-SNI clients SSLStrictSNIVHostCheck off # Ensure that Apache listens on port 443 Listen 443 # Listen for virtual host requests on all IP addresses NameVirtualHost *:80 NameVirtualHost *:443 # # PJTNAME.COM and subdomains SETTING # <VirtualHost *:443> # Because this virtual host is defined first, it will # be used as the default if the hostname is not received # in the SSL handshake, e.g. if the browser doesn't support # SNI. 
ServerName pjtname.com:443 DocumentRoot "/Users/<my_user_name>/Sites/pjtname.com/pjtname.com/public" ServerAdmin [email protected] ErrorLog "/private/var/log/apache2/error_log" TransferLog "/private/var/log/apache2/access_log" RackEnv development <Directory "/Users/<my_user_name>/Sites/pjtname.com/pjtname.com/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on # Self Signed certificates # Server Certificate SSLCertificateFile /private/etc/apache2/ssl/wildcard.certificate/pjtname.crt # Server Private Key SSLCertificateKeyFile /private/etc/apache2/ssl/wildcard.certificate/pjtname.key # Server Intermediate Bundle SSLCertificateChainFile /private/etc/apache2/ssl/wildcard.certificate/ca.crt </VirtualHost> # HTTP Setting <VirtualHost *:80> ServerName pjtname.com DocumentRoot "/Users/<my_user_name>/Sites/pjtname.com/pjtname.com/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/pjtname.com/pjtname.com/public"> Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost *:443> ServerName users.pjtname.com:443 DocumentRoot "/Users/<my_user_name>/Sites/pjtname.com/users.pjtname.com/public" ServerAdmin [email protected] ErrorLog "/private/var/log/apache2/error_log" TransferLog "/private/var/log/apache2/access_log" RackEnv development <Directory "/Users/<my_user_name>/Sites/pjtname.com/users.pjtname.com/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on # Self Signed certificates # Server Certificate SSLCertificateFile /private/etc/apache2/ssl/wildcard.certificate/pjtname.crt # Server Private Key SSLCertificateKeyFile /private/etc/apache2/ssl/wildcard.certificate/pjtname.key # Server Intermediate Bundle SSLCertificateChainFile /private/etc/apache2/ssl/wildcard.certificate/ca.crt </VirtualHost> # HTTP Setting <VirtualHost *:80> ServerName users.pjtname.com DocumentRoot "/Users/<my_user_name>/Sites/pjtname.com/users.pjtname.com/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/pjtname.com/users.pjtname.com/public"> Order allow,deny Allow from all </Directory> </VirtualHost> In the host file I have: ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. 
## 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost # PJTNAME.COM SETTING 127.0.0.1 pjtname.com 127.0.0.1 users.pjtname.com All seems to work properly because I have already set everything (I think correctly): I generated a wildcard certificate for my domains and sub-domains (in this example: *.pjtname.com) I have set base-named virtualhosts in the http.conf file listening on port :433 and :80 My browser accept certificates also if it alerts me that those aren't safe (notice: I must accept certificates for each domain\sub-domain; that is, [only] at the first time I access a domain or sub-domain over HTTPS I must do the same procedure for acceptance) and I can have access to pages using HTTPS After all this work, when I make a request using Typhoeus (I can use also the Ruby Net::Http library and the result doesn't change) from the pjtname.com RoR application: # Typhoeus request Typhoeus::Request.get("https://users.pjtname.com/") I get something like a warning about the certificate: --- &id001 !ruby/object:Typhoeus::Response app_connect_time: 0.0 body: "" code: 0 connect_time: 0.000625 # Here is the warning curl_error_message: Peer certificate cannot be authenticated with known CA certificates curl_return_code: 60 effective_url: https://users.pjtname.com/ headers: "" http_version: mock: false name_lookup_time: 0.000513 pretransfer_time: 0.0 request: !ruby/object:Typhoeus::Request after_complete: auth_method: body: ... All this means that something is wrong. So, what I have to do to avoid the "Peer certificate cannot be authenticated with known CA certificates" warning and make the HTTPS request to work? Where is\are the error\errors (I think in the Apache configuration, but where?!)? P.S.: if you need some more info, let me know.
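    A sketch, not a verified fix: curl error 60 normally means the client library does not trust the CA that signed the server certificate, so pointing the client at the self-signed ca.crt is the first thing to try. With the plain Net::HTTP library mentioned in the question, and assuming the certificate paths from the Apache configuration above:

        require "net/https"
        require "uri"

        uri = URI.parse("https://users.pjtname.com/")
        http = Net::HTTP.new(uri.host, uri.port)
        http.use_ssl = true
        # Trust the self-signed CA that issued the wildcard certificate
        http.ca_file = "/private/etc/apache2/ssl/wildcard.certificate/ca.crt"
        http.verify_mode = OpenSSL::SSL::VERIFY_PEER

        response = http.request(Net::HTTP::Get.new(uri.request_uri))
        puts response.code

    Typhoeus exposes an equivalent CA-file option, but its name differs between gem versions, so check the documentation of the installed release.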


  • Sendmail to local domain ignoring MX records (part 2)

    - by FractalizeR
    Hello. I have the exact problem, like in this post: http://serverfault.com/questions/25068/sendmail-to-local-domain-ignoring-mx-records I am also using email provider like GMail For Your Domain (which stores your mail and manages it). I am sending mail from my server directly, but receiving mail is done via Yandex (email provider). Since the server hosts forum, I prefer to send mail directly from it because using another mail provider can slow things. Also, when I send 300.000 emails to my subscribers, email provider will surely block me thinking I send spam. My DNS zone now is: ; ; GSMFORUM.RU ; $TTL 1H gsmforum.ru. SOA ns1.hc.ru. support.hc.ru. ( 2009122268 ; Serial 1H ; Refresh 30M ; Retry 1W ; Expire 1H ) ; Minimum gsmforum.ru. NS ns1.hc.ru. gsmforum.ru. NS ns2.hc.ru. @ A 79.174.68.223 *.gsmforum.ru. CNAME @ ns1 A 79.174.68.223 ns2 A 79.174.68.224 @ MX 10 mx.yandex.ru. mail CNAME domain.mail.yandex.net. yamail-xxxxxxxxx CNAME mail.yandex.ru. Server hostname is server.gsmforum.ru. May be this is the cause? Can someone explain the reason of the matter (the rules that make sendmail consider domain to be local)? Can I easily change *.gsmforum.ru. CNAME @ into *.gsmforum.ru. A 79.174.68.224 to solve this problem? [root@server ~]# cat /etc/mail/local-host-names localhost localhost.localdomain This server hosts gsmforum.ru so I cannot put it into another domain like David Mackintosh suggests. Putting domain in mailertable doesn't solve the problem also. sendmail -bt still shows, that address is local. DontProbeInterfaces is also set to true at sendmail config. M4 file follows: divert(-1)dnl dnl # dnl # This is the sendmail macro config file for m4. If you make changes to dnl # /etc/mail/sendmail.mc, you will need to regenerate the dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is dnl # installed and then performing a dnl # dnl # make -C /etc/mail dnl # include(`/usr/share/sendmail-cf/m4/cf.m4')dnl VERSIONID(`setup for linux')dnl OSTYPE(`linux')dnl dnl # dnl # Do not advertize sendmail version. dnl # dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl dnl # dnl # default logging level is 9, you might want to set it higher to dnl # debug the configuration dnl # dnl define(`confLOG_LEVEL', `9')dnl dnl # dnl # Uncomment and edit the following line if your outgoing mail needs to dnl # be sent out through an external mail server: dnl # dnl define(`SMART_HOST', `smtp.your.provider')dnl dnl # define(`confDEF_USER_ID', ``8:12'')dnl dnl define(`confAUTO_REBUILD')dnl define(`confTO_CONNECT', `1m')dnl define(`confTRY_NULL_MX_LIST', `True')dnl define(`confDONT_PROBE_INTERFACES',`True') define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl define(`ALIAS_FILE', `/etc/aliases')dnl define(`STATUS_FILE', `/var/log/mail/statistics')dnl define(`UUCP_MAILER_MAX', `2000000')dnl define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl define(`confAUTH_OPTIONS', `A')dnl dnl # dnl # The following allows relaying if the user authenticates, and disallows dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links dnl # dnl define(`confAUTH_OPTIONS', `A p')dnl dnl # dnl # PLAIN is the preferred plaintext authentication method and used by dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do dnl # use LOGIN. Other mechanisms should be used if the connection is not dnl # guaranteed secure. dnl # Please remember that saslauthd needs to be running for AUTH. 
dnl # dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl dnl # dnl # Rudimentary information on creating certificates for sendmail TLS: dnl # cd /usr/share/ssl/certs; make sendmail.pem dnl # Complete usage: dnl # make -C /usr/share/ssl/certs usage dnl # dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl dnl # dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's dnl # slapd, which requires the file to be readble by group ldap dnl # dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl dnl # dnl define(`confTO_QUEUEWARN', `4h')dnl dnl define(`confTO_QUEUERETURN', `5d')dnl dnl define(`confQUEUE_LA', `12')dnl dnl define(`confREFUSE_LA', `18')dnl define(`confTO_IDENT', `0')dnl dnl FEATURE(delay_checks)dnl FEATURE(`no_default_msa', `dnl')dnl FEATURE(`smrsh', `/usr/sbin/smrsh')dnl FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl FEATURE(redirect)dnl FEATURE(always_add_domain)dnl FEATURE(use_cw_file)dnl FEATURE(use_ct_file)dnl dnl # dnl # The following limits the number of processes sendmail can fork to accept dnl # incoming messages or process its message queues to 20.) sendmail refuses dnl # to accept connections once it has reached its quota of child processes. dnl # dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl dnl # dnl # Limits the number of new connections per second. This caps the overhead dnl # incurred due to forking new sendmail processes. May be useful against dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address dnl # limit would be useful but is not available as an option at this writing.) dnl # dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl dnl # dnl # The -t option will retry delivery if e.g. the user runs over his quota. dnl # FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl FEATURE(`blacklist_recipients')dnl EXPOSED_USER(`root')dnl dnl # dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment dnl # the following 2 definitions and activate below in the MAILER section the dnl # cyrusv2 mailer. dnl # dnl define(`confLOCAL_MAILER', `cyrusv2')dnl dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl dnl # dnl # The following causes sendmail to only listen on the IPv4 loopback address dnl # 127.0.0.1 and not on any other network devices. Remove the loopback dnl # address restriction to accept email from the internet or intranet. dnl # DAEMON_OPTIONS(`Name=MTA,Port=smtp') dnl # dnl # The following causes sendmail to additionally listen to port 587 for dnl # mail from MUAs that authenticate. Roaming users who can't reach their dnl # preferred sendmail daemon due to port 25 being blocked or redirected find dnl # this useful. dnl # dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl dnl # dnl # The following causes sendmail to additionally listen to port 465, but dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't dnl # do STARTTLS on ports other than 25. 
Mozilla Mail can ONLY use STARTTLS dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1. dnl # dnl # For this to work your OpenSSL certificates must be configured. dnl # dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl dnl # dnl # The following causes sendmail to additionally listen on the IPv6 loopback dnl # device. Remove the loopback address restriction listen to the network. dnl # dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl dnl # dnl # enable both ipv6 and ipv4 in sendmail: dnl # dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6') dnl # dnl # We strongly recommend not accepting unresolvable domains if you want to dnl # protect yourself from spam. However, the laptop and users on computers dnl # that do not have 24x7 DNS do need this. dnl # FEATURE(`accept_unresolvable_domains')dnl dnl # dnl FEATURE(`relay_based_on_MX')dnl dnl # dnl # Also accept email sent to "localhost.localdomain" as local email. dnl # LOCAL_DOMAIN(`localhost.localdomain')dnl dnl # dnl # The following example makes mail from this host and any additional dnl # specified domains appear to be sent from mydomain.com dnl # dnl MASQUERADE_AS(`mydomain.com')dnl dnl # dnl # masquerade not just the headers, but the envelope as well dnl # dnl FEATURE(masquerade_envelope)dnl dnl # dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well dnl # dnl FEATURE(masquerade_entire_domain)dnl dnl # dnl MASQUERADE_DOMAIN(localhost)dnl dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl dnl MASQUERADE_DOMAIN(mydomain.lan)dnl MAILER(smtp)dnl MAILER(procmail)dnl dnl MAILER(cyrusv2)dnl FEATURE(`dnsbl',`zen.spamhaus.org',`Rejected - your IP is blacklisted by http://www.spamhaus.org')
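    A small diagnostic sketch, assuming a stock sendmail installation: address-test mode shows exactly which names sit in class w (the "local" class) and how a given address is routed, which is usually the quickest way to see why the domain is still being delivered locally:

        # Start sendmail in address-test mode:
        sendmail -bt
        # At the > prompt, list every name sendmail treats as local (class w):
        #   $=w
        # Check that the mailertable entry is actually loaded:
        #   /map mailertable gsmforum.ru
        # Trace rulesets 3 then 0 to see which mailer and host are chosen:
        #   3,0 someone@gsmforum.ru

    If gsmforum.ru shows up in $=w even though it is not in local-host-names, something else (for example a sendmail.cf that was not rebuilt from the .mc file) is adding it.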


  • Tomcat/IIS 7 integration gives 503 errors on all requests

    - by Yvan JANSSENS
    Hi, After many attempts to install Tomcat on IIS 7, I finally managed to get it working. At least I think so :-S. I finally got the 500 errors away, by setting the correct permissions. The only thing that doesn't work is ... serving stuff: neither regular stuff (like ASP, HTML files, or directory browsing) or Tomcat things work. Here are my configs: Worker.properties # The workers that your plugins should create and work with # worker.list=worker1 #------ DEFAULT ajp13 WORKER DEFINITION ------------------------------ #--------------------------------------------------------------------- # Defining a worker named ajp13 and of type ajp13 # Note that the name and the type do not have to match. # worker.worker1.port=8009 worker.worker1.host=127.0.0.1 worker.worker1.type=ajp13 URIWorkerMap.properties /|/*=worker1 # Exclude the subdirectory static: !/static|/*=worker1 # Exclude some suffixes: !*.html=worker1 !*.asp=worker1 Server.xml <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8005" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. 
Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> http://localhost:8080 is working, I can view the apps and configure them there... I'm quite new to IIS 7, I used to work with IIS 6. Thanks in advance, Yvan
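    One hedged observation: with /|/*=worker1 in uriworkermap.properties, every request (including plain HTML, ASP and directory listings) is handed to the AJP worker, so if the redirector or the AJP connector misbehaves, nothing gets served at all. A narrower mapping that forwards only the Tomcat contexts is a common way to isolate the problem; the context names below are placeholders:

        # uriworkermap.properties - forward only the Tomcat web applications,
        # leave everything else (HTML, ASP, directory browsing) to IIS
        /examples|/*=worker1
        /mywebapp|/*=worker1

    The ISAPI redirector's own log (the log_file / log_level entries in isapi_redirect.properties) usually states why the extension answered 503.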


  • Failed to convert a WMV file to MP4 with ffmpeg

    - by Olaf Erlandsen
    i need a help with this command FFMPEG COMMAND: ffmpeg -y -i /input.wmv -vcodec libx264 -acodec libfaac -ac 2 -bufsize 20M -sameq -f mp4 /output.mp4 Output: ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers built on Oct 9 2012 07:04:08 with gcc 4.4.6 (GCC) 20120305 (Red Hat 4.4.6-4) [wmv3 @ 0x16a4800] Extra data: 8 bits left, value: 0 Guessed Channel Layout for Input Stream #0.0 : stereo Input #0, asf, from '/input.wmv': Metadata: WMFSDKVersion : 11.0.5721.5275 WMFSDKNeeded : 0.0.0.0000 IsVBR : 0 Duration: 00:01:35.10, start: 0.000000, bitrate: 496 kb/s Stream #0:0(spa): Audio: wmav2 (a[1][0][0] / 0x0161), 44100 Hz, stereo, s16, 64 kb/s Stream #0:1(spa): Video: wmv3 (Main) (WMV3 / 0x33564D57), yuv420p, 320x240, 425 kb/s, SAR 1:1 DAR 4:3, 29.97 tbr, 1k tbn, 1k tbc [libx264 @ 0x16c3000] VBV bufsize set but maxrate unspecified, ignored [libx264 @ 0x16c3000] using SAR=1/1 [libx264 @ 0x16c3000] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x16c3000] profile High, level 1.3 [libx264 @ 0x16c3000] 264 - core 128 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 [wmv3 @ 0x16a4800] Extra data: 8 bits left, value: 0 Output #0, mp4, to '/output.mp4': Metadata: WMFSDKVersion : 11.0.5721.5275 WMFSDKNeeded : 0.0.0.0000 IsVBR : 0 encoder : Lavf54.29.104 Stream #0:0(spa): Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=-1--1, 30k tbn, 29.97 tbc Stream #0:1(spa): Audio: aac ([64][0][0][0] / 0x0040), 44100 Hz, stereo, s16, 128 kb/s Stream mapping: Stream #0:1 -> #0:0 (wmv3 -> libx264) Stream #0:0 -> #0:1 (wmav2 -> libfaac) Press [q] to stop, [?] 
for help [libfaac @ 0x16b3600] Que input is backward in time [mp4 @ 0x16bb3a0] st:0 PTS: 6174 DTS: 6174 < 7169 invalid, clipping frame= 144 fps=0.0 q=29.0 size= 207kB time=00:00:03.38 bitrate= 500.3kbits/s frame= 259 fps=257 q=29.0 size= 447kB time=00:00:07.30 bitrate= 501.3kbits/s frame= 375 fps=248 q=29.0 size= 668kB time=00:00:11.01 bitrate= 496.5kbits/s frame= 487 fps=241 q=29.0 size= 836kB time=00:00:14.85 bitrate= 460.7kbits/s frame= 605 fps=240 q=29.0 size= 1080kB time=00:00:18.92 bitrate= 467.4kbits/s frame= 719 fps=238 q=29.0 size= 1306kB time=00:00:22.80 bitrate= 469.2kbits/s frame= 834 fps=237 q=29.0 size= 1546kB time=00:00:26.52 bitrate= 477.3kbits/s frame= 953 fps=237 q=29.0 size= 1763kB time=00:00:30.27 bitrate= 477.0kbits/s frame= 1071 fps=237 q=29.0 size= 1986kB time=00:00:34.36 bitrate= 473.4kbits/s frame= 1161 fps=231 q=29.0 size= 2160kB time=00:00:37.21 bitrate= 475.4kbits/s frame= 1221 fps=220 q=29.0 size= 2282kB time=00:00:39.53 bitrate= 472.9kbits/s frame= 1280 fps=212 q=29.0 size= 2392kB time=00:00:41.16 bitrate= 476.1kbits/s frame= 1331 fps=203 q=29.0 size= 2502kB time=00:00:43.23 bitrate= 474.1kbits/s frame= 1379 fps=195 q=29.0 size= 2618kB time=00:00:44.72 bitrate= 479.6kbits/s frame= 1430 fps=189 q=29.0 size= 2733kB time=00:00:46.34 bitrate= 483.0kbits/s frame= 1487 fps=184 q=29.0 size= 2851kB time=00:00:48.40 bitrate= 482.6kbits/s frame= 1546 fps=180 q=26.0 size= 2973kB time=00:00:50.43 bitrate= 482.9kbits/s frame= 1610 fps=177 q=29.0 size= 3112kB time=00:00:52.40 bitrate= 486.5kbits/s frame= 1672 fps=174 q=29.0 size= 3231kB time=00:00:54.35 bitrate= 487.0kbits/s frame= 1733 fps=171 q=29.0 size= 3348kB time=00:00:56.51 bitrate= 485.3kbits/s frame= 1792 fps=169 q=29.0 size= 3459kB time=00:00:58.28 bitrate= 486.2kbits/s frame= 1851 fps=166 q=29.0 size= 3588kB time=00:01:00.32 bitrate= 487.2kbits/s frame= 1910 fps=164 q=29.0 size= 3716kB time=00:01:02.36 bitrate= 488.1kbits/s frame= 1972 fps=162 q=29.0 size= 3833kB time=00:01:04.45 bitrate= 487.1kbits/s frame= 2032 fps=161 q=29.0 size= 3946kB time=00:01:06.40 bitrate= 486.8kbits/s frame= 2091 fps=159 q=29.0 size= 4080kB time=00:01:08.35 bitrate= 488.9kbits/s frame= 2150 fps=158 q=29.0 size= 4201kB time=00:01:10.54 bitrate= 487.9kbits/s frame= 2206 fps=156 q=29.0 size= 4315kB time=00:01:12.39 bitrate= 488.3kbits/s frame= 2263 fps=154 q=29.0 size= 4438kB time=00:01:14.21 bitrate= 489.9kbits/s frame= 2327 fps=154 q=29.0 size= 4567kB time=00:01:16.16 bitrate= 491.2kbits/s frame= 2388 fps=152 q=29.0 size= 4666kB time=00:01:18.48 bitrate= 487.0kbits/s frame= 2450 fps=152 q=29.0 size= 4776kB time=00:01:20.24 bitrate= 487.6kbits/s frame= 2511 fps=151 q=29.0 size= 4890kB time=00:01:22.15 bitrate= 487.6kbits/s frame= 2575 fps=150 q=29.0 size= 5015kB time=00:01:24.42 bitrate= 486.6kbits/s frame= 2635 fps=149 q=29.0 size= 5130kB time=00:01:26.62 bitrate= 485.2kbits/s frame= 2695 fps=148 q=29.0 size= 5258kB time=00:01:28.65 bitrate= 485.9kbits/s frame= 2758 fps=147 q=29.0 size= 5382kB time=00:01:30.64 bitrate= 486.4kbits/s frame= 2816 fps=147 q=29.0 size= 5521kB time=00:01:32.69 bitrate= 487.9kbits/s get_buffer() failed Error while decoding stream #0:0: Invalid argument frame= 2848 fps=143 q=-1.0 Lsize= 5787kB time=00:01:35.10 bitrate= 498.4kbits/s video:5099kB audio:581kB subtitle:0 global headers:0kB muxing overhead 1.884230% [libx264 @ 0x16c3000] frame I:12 Avg QP:22.64 size: 12092 [libx264 @ 0x16c3000] frame P:1508 Avg QP:25.39 size: 2933 [libx264 @ 0x16c3000] frame B:1328 Avg QP:30.62 size: 491 [libx264 @ 0x16c3000] 
consecutive B-frames: 10.0% 80.8% 8.1% 1.1% [libx264 @ 0x16c3000] mb I I16..4: 1.8% 72.1% 26.0% [libx264 @ 0x16c3000] mb P I16..4: 0.4% 2.4% 0.3% P16..4: 48.3% 19.6% 19.3% 0.0% 0.0% skip: 9.5% [libx264 @ 0x16c3000] mb B I16..4: 0.1% 0.2% 0.0% B16..8: 52.6% 6.6% 2.3% direct: 1.4% skip:36.8% L0:48.8% L1:42.5% BI: 8.7% [libx264 @ 0x16c3000] 8x8 transform intra:75.3% inter:55.4% [libx264 @ 0x16c3000] coded y,uvDC,uvAC intra: 77.9% 81.7% 33.1% inter: 24.2% 11.6% 1.1% [libx264 @ 0x16c3000] i16 v,h,dc,p: 25% 16% 44% 14% [libx264 @ 0x16c3000] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 15% 29% 6% 5% 6% 6% 7% 7% [libx264 @ 0x16c3000] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 15% 17% 7% 9% 8% 9% 7% 7% [libx264 @ 0x16c3000] i8c dc,h,v,p: 50% 19% 24% 7% [libx264 @ 0x16c3000] Weighted P-Frames: Y:3.8% UV:1.1% [libx264 @ 0x16c3000] ref P L0: 75.6% 19.1% 4.2% 1.0% 0.1% [libx264 @ 0x16c3000] ref B L0: 98.1% 1.9% 0.0% [libx264 @ 0x16c3000] ref B L1: 98.9% 1.1% [libx264 @ 0x16c3000] kb/s:439.47 FFMPEG Configuration: --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvpx --enable-libfaac --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-gpl --enable-postproc --enable-nonfree libavutil 51. 73.101 / 51. 73.101 libavcodec 54. 59.100 / 54. 59.100 libavformat 54. 29.104 / 54. 29.104 libavdevice 54. 2.101 / 54. 2.101 libavfilter 3. 17.100 / 3. 17.100 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 15.100 / 0. 15.100 libpostproc 52. 0.100 / 52. 0.100 PROBLEM #1: [libfaac @ 0x16b3600] Que input is backward in time [mp4 @ 0x16bb3a0] st:0 PTS: 6174 DTS: 6174 < 7169 invalid, clipping PROBLEM #2: get_buffer() failed Error while decoding stream #0:0: Invalid argument
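    A hedged sketch of an alternative command: -sameq does not mean "same quality" (it was deprecated and later removed from FFmpeg), and as the log notes -bufsize is ignored without -maxrate, so letting x264 run in CRF mode is usually simpler. The CRF value below is an assumption to tune:

        ffmpeg -y -i /input.wmv -vcodec libx264 -crf 20 -preset medium \
               -acodec libfaac -ac 2 -f mp4 /output.mp4

    The get_buffer()/decoding error at the very end of the log happens while reading the source stream, so it may point at a damaged or truncated spot in the WMV itself rather than at the conversion options.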


  • How to add a category to a DotClear blog with HttpWebRequest or the MetaWeblog API

    - by Pitming
    I'm trying to create/modify dotclear blogs. For most of the options, i use XmlRpc API (DotClear.MetaWeblog). But didn't find any way to handle categories. So I start to look at the Http packet and try to do "the same as the browser". Here si the method I use to "Http POST" protected HttpStatusCode HttpPost(Uri url_, string data_, bool allowAutoRedirect_) { HttpWebRequest Request; HttpWebResponse Response = null; Stream ResponseStream = null; Request = (System.Net.HttpWebRequest)HttpWebRequest.Create(url_); Request.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 (.NET CLR 3.5.30729)"; Request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"; Request.AllowAutoRedirect = allowAutoRedirect_; // Add the network credentials to the request. Request.Credentials = new NetworkCredential(Username, Password); string authInfo = Username + ":" + Password; authInfo = Convert.ToBase64String(Encoding.Default.GetBytes(authInfo)); Request.Headers["Authorization"] = "Basic " + authInfo; Request.Method = "POST"; Request.CookieContainer = Cookies; if(ConnectionCookie!=null) Request.CookieContainer.Add(url_, ConnectionCookie); if (dcAdminCookie != null) Request.CookieContainer.Add(url_, dcAdminCookie); Request.PreAuthenticate = true; ASCIIEncoding encoding = new ASCIIEncoding(); string postData = data_; byte[] data = encoding.GetBytes(postData); //Encoding.UTF8.GetBytes(data_); //encoding.GetBytes(postData); Request.ContentLength = data.Length; Request.ContentType = "application/x-www-form-urlencoded"; Stream newStream = Request.GetRequestStream(); // Send the data. newStream.Write(data, 0, data.Length); newStream.Close(); try { // get the response from the server. Response = (HttpWebResponse)Request.GetResponse(); if (!allowAutoRedirect_) { foreach (Cookie c in Response.Cookies) { if (c.Name == "dcxd") ConnectionCookie = c; if (c.Name == "dc_admin") dcAdminCookie = c; } Cookies.Add(Response.Cookies); } // Get the response stream. ResponseStream = Response.GetResponseStream(); // Pipes the stream to a higher level stream reader with the required encoding format. StreamReader readStream = new StreamReader(ResponseStream, Encoding.UTF8); string result = readStream.ReadToEnd(); if (Request.RequestUri == Response.ResponseUri) { _log.InfoFormat("{0} ==&gt; {1}({2})", Request.RequestUri, Response.StatusCode, Response.StatusDescription); } else { _log.WarnFormat("RequestUri:{0}\r\nResponseUri:{1}\r\nstatus code:{2} Status descr:{3}", Request.RequestUri, Response.ResponseUri, Response.StatusCode, Response.StatusDescription); } } catch (WebException wex) { Response = wex.Response as HttpWebResponse; if (Response != null) { _log.ErrorFormat("{0} ==&gt; {1}({2})", Request.RequestUri, Response.StatusCode, Response.StatusDescription); } Request.Abort(); } finally { if (Response != null) { // Releases the resources of the response. Response.Close(); } } if(Response !=null) return Response.StatusCode; return HttpStatusCode.Ambiguous; } So the first thing to do is to Authenticate as admin. 
Here is the code: protected bool HttpAuthenticate() { Uri u = new Uri(this.Url); Uri url = new Uri(string.Format("{0}/admin/auth.php", u.GetLeftPart(UriPartial.Authority))); string data = string.Format("user_id={0}&user_pwd={1}&user_remember=1", Username, Password); var ret = HttpPost(url,data,false); return (ret == HttpStatusCode.OK || ret==HttpStatusCode.Found); } 3.Now that I'm authenticate, i need to get a xd_chek info (that i can find on the page so basically it's a GET on /admin/category.php + Regex("dotclear[.]nonce = '(.*)'")) 4.so I'm authenticate and have the xd_check info. The last thing to do seems to post the next category. But of course it does not work at all... here is the code: string postData = string.Format("cat_title={0}&new_cat_parent={1}&xd_check={2}", category_, 0, xdCheck); HttpPost(url, postData, true); If anyone can help me and explain were is it wrong ? thanks in advance.
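    Nothing in the excerpt pins down the failure, but one detail worth checking is that the category title is URL-encoded before it is posted (the form data above is concatenated raw and ASCII-encoded). A hypothetical sketch against the same HttpPost helper, with the field names taken from the question and baseUrl standing in for the blog's address:

        // Hypothetical sketch: URL-encode the form values before posting them
        string postData = string.Format(
            "cat_title={0}&new_cat_parent={1}&xd_check={2}",
            Uri.EscapeDataString(category_),   // handles spaces, accents, '&', '=' ...
            0,
            Uri.EscapeDataString(xdCheck));

        // /admin/category.php is the page the browser posts to when creating a category
        HttpPost(new Uri(baseUrl + "/admin/category.php"), postData, true);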


  • Use DivX settings to encode to MP4 with ffmpeg

    - by sjngm
    I'm used to use VirtualDub to encode a video to AVI container with DivX-codec (and MP3 for audio). Now I'm planning to use ffmpeg to encode videos to MP4 container with h264-codec. What I've figured out is that I need to use libx264 and one of those presets to make anything work. However, I'm amazed about the video bitrate ffmpeg uses for encoding. What I currently have is this little batch file: @ECHO OFF SETLOCAL SET IN=source.avs SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg SET PRESET=-fpre "%FFMPEG_PATH%\presets\libx264-lossless_slow.ffpreset" SET AUDIO=-acodec libmp3lame -ab 128000 SET VIDEO=-vcodec libx264 -vb 1978000 "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %VIDEO% %PRESET% test.mp4 ENDLOCAL With this I tell ffmpeg to use 1978k as the bitrate, but ffmpeg uses 15000k+! I tried other presets, but they don't use my specified bitrate. Here are the presets I have: libx264-baseline.ffpreset libx264-ipod320.ffpreset libx264-ipod640.ffpreset libx264-lossless_fast.ffpreset libx264-lossless_max.ffpreset libx264-lossless_medium.ffpreset libx264-lossless_slow.ffpreset libx264-lossless_slower.ffpreset libx264-lossless_ultrafast.ffpreset ffmpeg version: FFmpeg git-N-29181-ga304071 libavutil 50. 40. 1 / 50. 40. 1 libavcodec 52.120. 0 / 52.120. 0 libavformat 52.108. 0 / 52.108. 0 libavdevice 52. 4. 0 / 52. 4. 0 libavfilter 1. 79. 0 / 1. 79. 0 libswscale 0. 13. 0 / 0. 13. 0 Note that I don't use the latest version as it has problems with spaces in filenames. Here's what seems to be the full parameter list DivX 6.9.2 uses: -bvnn 1978000 -vbv 218691200,100663296,100663296 -dir "C:\Users\sjngm\AppData\Roaming\DivX\DivX Codec" -w -b 1 -use_presets=1 -preset=10 -windowed_fullsearch=2 -thread_delay=1 What command line parameters would that be for ffmpeg? EDIT: Going with slhck's suggestion I tried a new 32-bit version. I have no idea if that is 0.9 or newer, I can't find that info. ffmpeg version N-36890-g67f5650 libavutil 51. 34.100 / 51. 34.100 libavcodec 53. 56.105 / 53. 56.105 libavformat 53. 30.100 / 53. 30.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 59.100 / 2. 59.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 51. 2.100 / 51. 2.100 I reworked my batch file to look like this (interestingly enough I can't find parameter -vprofile in the documentation): @ECHO OFF SETLOCAL SET IN=VTS_01_1.avs SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg SET PRESET=-vprofile high -preset veryslow SET AUDIO=-acodec libmp3lame -ab 128000 SET VIDEO=-vcodec libx264 -vb 1978000 "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %PRESET% %VIDEO% test.mp4 ENDLOCAL I see that it now uses the bitrate properly (thanks to LongNeckbeard for pointing out that the lossless-stuff ignores the bitrate!). Just in case you wonder how I came up with the 1978000, I'm using this formula which I found valid for DivX-files (I'm guessing the bitrate won't change that much for h264): width * height * 25 * 0.22 / 1000 I'm not sure if the 0.22 correlates with the CRF somehow. Overall I forgot to say the I will use a two-pass scenario, which is why I don't use the CRF here. I will try to read more about this. Currently I'm just trying to get something running that shows me that I'm doing something right (ffmpeg isn't the easiest tool to understand ;)). 
C:\Program Files (x86)\ffmpeg\ffmpeg.exe" -i VTS_01_1.avs -acodec libmp3lame -ab 128000 -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow test.mp4 The output is now: ffmpeg version N-36890-g67f5650 Copyright (c) 2000-2012 the FFmpeg developers built on Jan 16 2012 21:57:13 with gcc 4.6.2 configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 51. 34.100 / 51. 34.100 libavcodec 53. 56.105 / 53. 56.105 libavformat 53. 30.100 / 53. 30.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 59.100 / 2. 59.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 51. 2.100 / 51. 2.100 Input #0, avs, from 'VTS_01_1.avs': Duration: 00:58:46.12, start: 0.000000, bitrate: 0 kb/s Stream #0:0: Video: rawvideo (YV12 / 0x32315659), yuv420p, 576x448, 77414 kb/s, 25 tbr, 25 tbn, 25 tbc Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 2 channels, s16, 1536 kb/s File 'test.mp4' already exists. Overwrite ? [y/N] y w:576 h:448 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param: [libx264 @ 05A2C400] using cpu capabilities: MMX2 SSE2Fast FastShuffle SSEMisalign LZCNT [libx264 @ 05A2C400] profile High, level 3.1 [libx264 @ 05A2C400] 264 - core 120 r2120 0c7dab9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=16 deblock=1:0:0 analyse=0x3:0x133 me=umh subme=10 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=24 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=8 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=60 rc=abr mbtree=1 bitrate=1978 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'test.mp4': Metadata: encoder : Lavf53.30.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuv420p, 576x448, q=-1--1, 1978 kb/s, 25 tbn, 25 tbc Stream #0:1: Audio: mp3 (i[0][0][0] / 0x0069), 48000 Hz, 2 channels, s16, 128 kb/s Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Stream #0:1 -> #0:1 (pcm_s16le -> libmp3lame) Press [q] to stop, [?] 
for help frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 3 fps= 1 q=22.0 size= 39kB time=00:00:00.04 bitrate=8063.8kbits/ frame= 8 fps= 2 q=22.0 size= 82kB time=00:00:00.24 bitrate=2801.3kbits/ frame= 13 fps= 3 q=23.0 size= 120kB time=00:00:00.44 bitrate=2229.5kbits/ frame= 16 fps= 4 q=23.0 size= 147kB time=00:00:00.56 bitrate=2156.7kbits/ frame= 20 fps= 4 q=22.0 size= 175kB time=00:00:00.72 bitrate=1987.4kbits/ : video:4387kB audio:273kB global headers:0kB muxing overhead 0.260038% [libx264 @ 05A2C400] frame I:2 Avg QP:19.53 size: 29850 [libx264 @ 05A2C400] frame P:76 Avg QP:22.24 size: 19541 [libx264 @ 05A2C400] frame B:359 Avg QP:25.93 size: 8210 [libx264 @ 05A2C400] consecutive B-frames: 0.5% 0.5% 0.0% 8.2% 17.2% 52.2% 16.0% 5.5% 0.0% [libx264 @ 05A2C400] mb I I16..4: 5.4% 75.3% 19.3% [libx264 @ 05A2C400] mb P I16..4: 1.3% 16.5% 2.2% P16..4: 36.3% 28.6% 12.7% 1.8% 0.2% skip: 0.4% [libx264 @ 05A2C400] mb B I16..4: 0.4% 3.8% 0.3% B16..8: 40.0% 18.4% 4.7% direct:18.5% skip:13.9% L0:45.4% L1:38.1% BI:16.5% [libx264 @ 05A2C400] final ratefactor: 20.35 [libx264 @ 05A2C400] 8x8 transform intra:83.1% inter:68.5% [libx264 @ 05A2C400] direct mvs spatial:99.2% temporal:0.8% [libx264 @ 05A2C400] coded y,uvDC,uvAC intra: 64.9% 83.4% 49.2% inter: 49.0% 50.4% 4.4% [libx264 @ 05A2C400] i16 v,h,dc,p: 25% 22% 27% 26% [libx264 @ 05A2C400] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 10% 7% 23% 9% 10% 10% 10%10% 13% [libx264 @ 05A2C400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 11% 13% 9% 12% 11% 10% 9% 12% [libx264 @ 05A2C400] i8c dc,h,v,p: 42% 28% 16% 14% [libx264 @ 05A2C400] Weighted P-Frames: Y:18.4% UV:7.9% [libx264 @ 05A2C400] ref P L0: 29.1% 11.3% 15.7% 7.3% 6.9% 4.9% 5.1% 3.4%3.9% 2.7% 2.8% 1.8% 1.7% 1.2% 1.4% 0.9% [libx264 @ 05A2C400] ref B L0: 68.8% 11.4% 5.5% 2.9% 2.3% 1.9% 1.5% 1.1%1.1% 1.0% 0.9% 0.7% 0.5% 0.3% 0.1% [libx264 @ 05A2C400] ref B L1: 91.9% 8.1% [libx264 @ 05A2C400] kb/s:2055.88 As far as I'm concerned it doesn't look that bad to me.
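    Since the goal is a fixed average bitrate, here is a two-pass sketch built from the same options as above (plus -pass and a discarded first-pass output; profile and preset remain a matter of taste):

        "%FFMPEG_PATH%\ffmpeg.exe" -y -i %IN% -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow -pass 1 -an -f null NUL
        "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% -acodec libmp3lame -ab 128000 -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow -pass 2 test.mp4

    For a single pass where the exact file size does not matter, a CRF value (roughly 18-23) generally replaces the width x height bitrate formula.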


  • How to create a SOAP request using ASP.NET (VB) without using Visual Studio

    - by user311691
    Hi all , I urgently need your help . I am new to consuming a web service using SOAP protocol. I have been given a demo webservice URL which ends in .WSDL and NOT .asml?WSDL. The problem is I cannot add a web reference using Visual studio OR Disco.exe or Wsdl.exe - This webservice has been created on a java platform and for security reasons the only way to make a invoke the webservice is at runtime using SOAP protocol IN asp.net (VB). I I have created some code but cannot seem to send the soap object to the receiving web service. If I could get a solution with step by step instructions on how I can send a SOAP REQUEST. Below is my code and all am trying to do is send a SOAP REQUEST and receive a SOAP RESPONSE which I will display in my browser. <%@ page language="vb" %> <%@ Import Namespace="System.Data"%> <%@ Import Namespace="System.Xml"%> <%@ Import Namespace="System.Net"%> <%@ Import Namespace="System.IO"%> <%@ Import Namespace="System.Text"%> <script runat=server> Private Sub Page_Load() Dim objHTTPReq As HttpWebRequest Dim WebserviceUrl As String = "http://xx.xx.xx:8084/asy/wsdl/asy.wsdl" objHTTPReq = CType(WebRequest.Create(WebserviceUrl), HttpWebRequest) Dim soapXML As String soapXML = "<?xml version='1.0' encoding='utf-8'?>" & _ " <soap:Envelope xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'" & _ " xmlns:xsd='http://www.w3.org/2001/XMLSchema'"& _ " xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' >"& _ " <soap:Body> "& _ " <validatePaymentData xmlns='http://asybanks.webservices.asycuda.org'> " & _ " <bankCode>"& bankCode &"</bankCode> " & _ " <PaymentDataType>" & _ " <paymentType>"& payment_type &"</paymentType> " & _ " <amount>"& ass_amount &"</amount> " & _ " <ReferenceType>" & _ " <year>"& year &"</year> " & _ " <customsOfficeCode>"& station &"</customsOfficeCode> " & _ " </ReferenceType>" & _ " <accountNumber>"& zra_account &"</accountNumber> " & _ " </PaymentDataType> " & _ " </validatePaymentData> " & _ " </soap:Body> " & _ " </soap:Envelope> " objHTTPReq.Headers.Add("SOAPAction", "http://asybanks.webservices.asycuda.org") objHTTPReq.ContentType = "text/xml; charset=utf-8" objHTTPReq.ContentLength = soapXML.Length objHTTPReq.Accept = "text/xml" objHTTPReq.Method = "POST" Dim objHTTPRes As HttpWebResponse = CType(objHTTPReq.GetResponse(), HttpWebResponse) Dim dataStream As Stream = objHTTPRes.GetResponseStream() Dim reader As StreamReader = new StreamReader(dataStream) Dim responseFromServer As String = reader.ReadToEnd() OurXml.text = responseFromServer End Sub </script> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title> XML TRANSACTION SIMULATION - N@W@ TJ </title> </head> <body> <form id="form1" runat="server"> <div> <p>ZRA test Feedback:</p> <asp:label id="OurXml" runat="server"/> </div> </form> </body> </html> the demo webservice looks like this: <?xml version="1.0" encoding="UTF-8" ?> - <!-- WEB SERVICE JAVA DEMO --> - <definitions targetNamespace="http://asybanks.webservices.asycuda.org" xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:apachesoap="http://xml.apache.org/xml-soap" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:y="http://asybanks.webservices.asycuda.org"> - <types> - <xs:schema elementFormDefault="qualified" targetNamespace="http://asybanks.webservices.asycuda.org" xmlns="http://www.w3.org/2001/XMLSchema"> SOME OTHER INFORMATION AT THE BOTTOM <soap:address location="http://xx.xx.xx:8084/asy/services/asy" /> </port> </service> </definitions> From the above excerpt of 
the WSDL of the web service, I am not sure which namespace to use for SOAPAction - please advise. If you could comment every stage of a SOAP request and provide a working demo, I would be most grateful, as I would be learning rather than just assuming stuff :)
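    On the SOAPAction question: the value normally comes from the soapAction attribute of the <soap:operation> element in the WSDL binding section, not from the target namespace, so that is the attribute to look up in asy.wsdl. Separately, here is a minimal sketch of sending the envelope as UTF-8 and sizing the request from the encoded bytes (the string-length/ASCII combination in the code above only holds for pure-ASCII payloads):

        ' Minimal sketch: encode the envelope as UTF-8 and use the byte count
        Dim data As Byte() = Encoding.UTF8.GetBytes(soapXML)
        objHTTPReq.Method = "POST"
        objHTTPReq.ContentType = "text/xml; charset=utf-8"
        objHTTPReq.ContentLength = data.Length
        Using reqStream As Stream = objHTTPReq.GetRequestStream()
            reqStream.Write(data, 0, data.Length)
        End Using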


  • Optimize php-fpm and Varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to..i don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" Im not sure where to start, should i for start optimize php-fpm and then go to varnish or php-fpm is at its max right now so i should start looking for the problem in varnish?

    Read the article

  • When splitting MP4s with ffmpeg, how do I include the metadata?

    - by Josh
    I have a few MP4s that i want to upload to my flickr account but they have a maximum size of 500mb as mine is only about 550 i was planing to simply split them in half then upload them, but i want to make sure all the meta data is included but it does not seem to be. I have tried each of the following with no luck, (at the end of this post i have the original and the new ffprobe outputs): ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4 ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_meta_data SANY0069.MP4:SANY0069A.MP4 SANY0069A.MP4 with the this one I manually produced the individual meta tags that i took from this command ffmpeg -i SANY0069A.MP4 -f ffmetadata meta.txt ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -metadata major_brand="mp42" -metadata minor_version="1" -metadata compatible_brands="mp42avc1" -metadata creation_time="2012-09-29 09:05:50" -metadata comment="SANYO DIGITAL CAMERA CA9" -metadata comment-eng="SANYO DIGITAL CAMERA CA9" SANY0069A.MP4 using the output of the former command i also tried this: ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -f ffmetadata -i meta.txt SANY0069A.MP4 Output: sample output from my first command: ffmpeg -ss 00:00:00.00 -t 00:04:19.35 -i SANY0069.MP4 -acodec copy -vcodec copy -map_metadata 0:0 SANY0069A.MP4 ffmpeg version 0.8.12, Copyright (c) 2000-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 File 'SANY0069A.MP4' already exists. Overwrite ? 
[y/N] y Output #0, mp4, to 'SANY0069A.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 encoder : Lavf53.5.0 Stream #0.0(eng): Video: libx264, yuv420p, 1280x720 [PAR 1:1 DAR 16:9], q=2-31, 9007 kb/s, 30k tbn, 29.97 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] for help frame= 7773 fps=4644 q=-1.0 Lsize= 289607kB time=00:04:19.35 bitrate=9147.4kbits/s video:285416kB audio:4033kB global headers:0kB muxing overhead 0.054571% and finaly, when i compare the ffprobe of the original and the first split part i get the 2 following outputs: original ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 
0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069.MP4': Metadata: major_brand : mp42 minor_version : 1 compatible_brands: mp42avc1 creation_time : 2012-09-29 09:05:50 comment : SANYO DIGITAL CAMERA CA9 comment-eng : SANYO DIGITAL CAMERA CA9 Duration: 00:08:38.71, start: 0.000000, bitrate: 9142 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9007 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 2012-09-29 09:05:50 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 2012-09-29 09:05:50 Split ffprobe version 0.8.12, Copyright (c) 2007-2011 the FFmpeg developers built on Jun 13 2012 09:57:38 with gcc 4.6.3 20120306 (Red Hat 4.6.3-2) configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --enable-bzlib --enable-libcelt --enable-libdc1394 --enable-libdirac --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-x11grab --enable-avfilter --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect libavutil 51. 9. 1 / 51. 9. 1 libavcodec 53. 8. 0 / 53. 8. 0 libavformat 53. 5. 0 / 53. 5. 0 libavdevice 53. 1. 1 / 53. 1. 1 libavfilter 2. 23. 0 / 2. 23. 0 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'SANY0069A.MP4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf53.5.0 comment : SANYO DIGITAL CAMERA CA9 Duration: 00:04:19.37, start: 0.000000, bitrate: 9146 kb/s Stream #0.0(eng): Video: h264 (Constrained Baseline), yuv420p, 1280x720 [PAR 1:1 DAR 16:9], 9015 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 127 kb/s Metadata: creation_time : 1970-01-01 00:00:00 I know this is incredibly long but its actually a quite simple question. I thought it would be best to provide as much detail as possible. any advice here would be great, Thanks

    Read the article

  • plupload variable upload path?

    - by SoulieBaby
    Hi all, I'm using plupload to upload files to my server (http://www.plupload.com/index.php), however I wanted to know if there was any way of making the upload path variable. Basically I need to select the upload path folder first, then choose the files using plupload and then upload to the initially selected folder. I've tried a few different ways but I can't seem to pass along the variable folder path to the upload.php file. I'm using the flash version of plupload. If someone could help me out, that would be fantastic!! :) Here's my plupload jquery: jQuery.noConflict(); jQuery(document).ready(function() { jQuery("#flash_uploader").pluploadQueue({ // General settings runtimes: 'flash', url: '/assets/upload/upload.php', max_file_size: '10mb', chunk_size: '1mb', unique_names: false, // Resize images on clientside if we can resize: {width: 500, height: 350, quality: 100}, // Flash settings flash_swf_url: '/assets/upload/flash/plupload.flash.swf' }); }); And here's the upload.php file: <?php /** * upload.php * * Copyright 2009, Moxiecode Systems AB * Released under GPL License. * * License: http://www.plupload.com/license * Contributing: http://www.plupload.com/contributing */ // HTTP headers for no cache etc header('Content-type: text/plain; charset=UTF-8'); header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); header("Last-Modified: ".gmdate("D, d M Y H:i:s")." GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); // Settings $targetDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads"; //temp directory <- need these to be variable $finalDir = $_SERVER['DOCUMENT_ROOT']."/tmp/uploads2"; //final directory <- need these to be variable $cleanupTargetDir = true; // Remove old files $maxFileAge = 60 * 60; // Temp file age in seconds // 5 minutes execution time @set_time_limit(5 * 60); // usleep(5000); // Get parameters $chunk = isset($_REQUEST["chunk"]) ? $_REQUEST["chunk"] : 0; $chunks = isset($_REQUEST["chunks"]) ? $_REQUEST["chunks"] : 0; $fileName = isset($_REQUEST["name"]) ? $_REQUEST["name"] : ''; // Clean the fileName for security reasons $fileName = preg_replace('/[^\w\._]+/', '', $fileName); // Create target dir if (!file_exists($targetDir)) @mkdir($targetDir); // Remove old temp files if (is_dir($targetDir) && ($dir = opendir($targetDir))) { while (($file = readdir($dir)) !== false) { $filePath = $targetDir . DIRECTORY_SEPARATOR . $file; // Remove temp files if they are older than the max age if (preg_match('/\\.tmp$/', $file) && (filemtime($filePath) < time() - $maxFileAge)) @unlink($filePath); } closedir($dir); } else die('{"jsonrpc" : "2.0", "error" : {"code": 100, "message": "Failed to open temp directory."}, "id" : "id"}'); // Look for the content type header if (isset($_SERVER["HTTP_CONTENT_TYPE"])) $contentType = $_SERVER["HTTP_CONTENT_TYPE"]; if (isset($_SERVER["CONTENT_TYPE"])) $contentType = $_SERVER["CONTENT_TYPE"]; if (strpos($contentType, "multipart") !== false) { if (isset($_FILES['file']['tmp_name']) && is_uploaded_file($_FILES['file']['tmp_name'])) { // Open temp file $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? 
"wb" : "ab"); if ($out) { // Read binary input stream and append it to temp file $in = fopen($_FILES['file']['tmp_name'], "rb"); if ($in) { while ($buff = fread($in, 4096)) fwrite($out, $buff); } else die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}'); fclose($out); unlink($_FILES['file']['tmp_name']); } else die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}'); } else die('{"jsonrpc" : "2.0", "error" : {"code": 103, "message": "Failed to move uploaded file."}, "id" : "id"}'); } else { // Open temp file $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab"); if ($out) { // Read binary input stream and append it to temp file $in = fopen("php://input", "rb"); if ($in) { while ($buff = fread($in, 4096)){ fwrite($out, $buff); } } else die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}'); fclose($out); } else die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}'); } //Moves the file from $targetDir to $finalDir after receiving the final chunk if($chunk == ($chunks-1)){ rename($targetDir . DIRECTORY_SEPARATOR . $fileName, $finalDir . DIRECTORY_SEPARATOR . $fileName); } // Return JSON-RPC response die('{"jsonrpc" : "2.0", "result" : null, "id" : "id"}'); ?>

    Read the article
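    A sketch of one way to make the target folder in the upload.php above variable, assuming the page sends the chosen folder name along with the upload (for example via plupload's multipart_params option, or appended to the url setting as a query string): upload.php then maps that name onto a whitelisted server-side path instead of hard-coding $targetDir and $finalDir. The "folder" key and the directory list below are placeholders, not part of the original script.

        <?php
        // Sketch for the top of upload.php: translate a client-sent "folder" key
        // (a placeholder name) into a whitelisted directory, so raw client input
        // is never used to build a path.
        $allowedDirs = array(
            'uploads'  => $_SERVER['DOCUMENT_ROOT'] . '/tmp/uploads',
            'uploads2' => $_SERVER['DOCUMENT_ROOT'] . '/tmp/uploads2',
        );

        $requested = isset($_REQUEST['folder']) ? $_REQUEST['folder'] : 'uploads';

        if (!isset($allowedDirs[$requested])) {
            die('{"jsonrpc" : "2.0", "error" : {"code": 104, "message": "Invalid target folder."}, "id" : "id"}');
        }

        $targetDir = $allowedDirs[$requested];  // the rest of upload.php can use this as before
        ?>

    Because only known keys are accepted, the user's folder choice travels with every chunk but can never point the script at an arbitrary directory.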

  • SSL confirmation dialog popup auto closes in IE8 when re-accessing a JNLP file

    - by haylem
    I'm having this very annoying problem to troubleshoot and have been going at it for way too many days now, so have a go at it. The Environment We have 2 app-servers, which can be located on either the same machine or 2 different machines, and use the same signing certificate, and host 2 different web-apps. Though let's say, for the sake of our study case here, that they are on the same physical machine. So, we have: https://company.com/webapp1/ https://company.com/webapp2/ webapp1 is GWT-based rich-client which contains on one of its screens a menu with an item that is used to invoke a Java WebStart Client located on webapp2. It does so by performing a simple window.open call via this GWT call: Window.open("https://company.com/webapp2/app.jnlp", "_blank", null); Expected Behavior User merrilly goes to webapp1 User navigates to menu entry to start the WebStart app and clicks on it browser fires off a separate window/dialog which, depending on the browser and its security settings, will: request confirmation to navigate to this secure site, directly download the file, and possibly auto-execute a javaws process if there's a file association, otherwise the user can simply click on the file and start the app (or go about doing whatever it takes here). If you close the app, close the dialog, and re-click the menu entry, the same thing should happen again. Actual Behavior On Anything but God-forsaken IE 8 (Though I admit there's also all the god-forsaken pre-IE8 stuff, but the Requirements Lords being merciful we have already recently managed to make them drop these suckers. That was close. Let's hold hands and say a prayer of gratitude.) Stuff just works. JNLP gets downloaded, app executes just fine, you can close the app and re-do all the steps and it will restart happily. People rejoice. Puppies are safe and play on green hills in the sunshine. Developers can go grab a coffee and move on to more meaningful and rewarding tasks, like checking out on SO questions. Chrome doesn't want to execute the JNLP, but who cares? Customers won't get RSI from clicking a file every other week. On God-forsaken IE8 On the first visit, the dialog opens and requests confirmation for the user to continue to webapp2, though it could be unsafe (here be dragons, I tell you). The JNLP downloads and auto-opens, the app start. Your breathing is steady and slow. You close the app, close that SSL confirmation dialog, and re-click the menu entry. The dialog opens and auto-closes. Nothing starts, the file wasn't downloaded to any known location and Fiddler just reports the connection was closed. If you close IE and reach that menu item to click it again, it is now back to working correctly. Until you try again during the same session, of course. Your heart-rate goes up, you get some more coffee to make matters worse, and start looking for plain tickets online and a cheap but heavy golf-club on an online auction site to go clubbing baby polar seals to avenge your bloodthirst, as the gates to the IE team in Redmond are probably more secured than an ice block, as one would assume they get death threats often. Plus, the IE9 and IE10 teams are already hard at work fxing the crap left by their predecessors, so maybe you don't want to be too hard on them, and you don't have money to waste on a PI to track down the former devs responsible for this mess. Added Details I have come across many problems with IE8 not downloading files over SSL when it uses a no-cache header. 
This was indeed one of our problems, which seems to be worked out now. It downloads files fine, webapp2 uses the following headers to serve the JNLP file: response.setHeader("Cache-Control", "private, must-revalidate"); // IE8 happy response.setHeader("Pragma", "private"); // IE8 happy response.setHeader("Expires", "0"); // IE8 happy response.setHeader("Access-Control-Allow-Origin", "*"); // allow to request via cross-origin AJAX response.setContentType("application/x-java-jnlp-file"); // please exec me As you might have inferred, we get some confirmation dialog because there's something odd with the SSL certificate. Unfortunately I have no control over that. Assuming that's only temporary and for development purposes as we usually don't get our hands on the production certs. So the SSL cert is expired and doesn't specify the server. And the confirmation dialog. Wouldn't be that bad if it weren't for IE, as other browsers don't care, just ask for confirmation, and execute as expected and consistantly. Please, pretty please, help me, or I might consider sacrificial killings as an option. And I think I just found a decently prized stainless steel golf-club, so I'm right on the edge of gore. Side Notes Might actually be related to IE8 window.open SSL Certificate issue. Though it doesn't explain why the dialog would auto-close (that really is beyong me...), it could help to not have the confirmation dialog and not need the dialog at all. For instance, I was thinking that just having a simple URL in that menu instead of have it entirely managed by GWT code to invoke a Window.open would solve the problem. But I don't have control on that menu, and also I'm very curious how this could be fixed otherwise and why the hell it happens in the first place...

    Read the article

  • PHP File Serving Script: Unreliable Downloads?

    - by JGB146
    This post started as a question on ServerFault ( http://serverfault.com/questions/131156/user-receiving-partial-downloads ) but I determined that our php script was the culprit. So I'm issuing an updated question here about what I believe is the actual issue. I am using a php script to verify permissions and then serve up a file for users of my website to download. Most of the time, this works, but recently one user has been seeing problems with larger downloads. He is only getting ~80% of downloads for files that are 100MB in size. Also, all downloads from this script fail to report a filesize. Further, tests revealed that the same user COULD reliably download each of the failed files if given a direct link (at which point the filesize is reported). Here's the relevant snippet of code that we are using to serve the file: header("Content-type:$contenttype"); $len = filesize($filename); header("Content-Length: $len"); header("Content-Disposition: attachment; filename=".$title.".".$ext); readfile($filename); Note that $contenttype, $filename, $title, and $ext are all set correctly before we get here. These have been triple-checked. None of them are the problem. Also, $len does provide the correct filesize. While researching this issue, I came across this post: http://stackoverflow.com/questions/1334471/content-length-header-always-zero It seems that I am encountering the same issue. When I use the script, I get chunked encoding on the file and no size is set for content-length. I'm hypothesizing that something is going wrong on the large downloads, leading him to get a zero-length chunk before the end of the file. Here's what the headers look like for a direct request: http://www.grinderschool.com/videos/zfff5061b65ae00e8b21/KillsAids021.wmv GET /videos/zfff5061b65ae00e8b21/KillsAids021.wmv HTTP/1.1 Host: www.grinderschool.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://www.grinderschool.com/phpBB3/viewtopic.php?f=14&p=29468 Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884 Pragma: no-cache Cache-Control: no-cache HTTP/1.1 200 OK Date: Sun, 11 Apr 2010 12:57:41 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635 Last-Modified: Sun, 04 Apr 2010 12:51:06 GMT Etag: "eb42d6-7d9b843-48368aa6dc280" Accept-Ranges: bytes Content-Length: 131708995 Keep-Alive: timeout=10, max=30 Connection: Keep-Alive Content-Type: video/x-ms-wmv And here's what they look like for the request answered by my script: http://www.grinderschool.com/download_video_test.php?t=KillsAids021&format=wmv GET /download_video_test.php?t=KillsAids021&format=wmv HTTP/1.1 Host: www.grinderschool.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884 HTTP/1.1 200 OK Date: Sun, 11 Apr 2010 12:58:02 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635 X-Powered-By: PHP/5.2.11 Content-Disposition: attachment; filename=KillsAids021.wmv Vary: Accept-Encoding Content-Encoding: gzip Keep-Alive: timeout=10, max=30 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: video/x-ms-wmv So the question is...what can I do to make downloads from the script work properly? Again, for 99% of users, it works as is (though I find it annoying now that no filesize is reported and thus that no time estimate can be computed about the download).
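    For what it's worth, the second set of headers shows Content-Encoding: gzip plus Transfer-Encoding: chunked, which is exactly why the Content-Length you set disappears: something between the script and the client (mod_deflate, or PHP's zlib output compression) recompresses the body, so the server has to chunk it and drop the length. A minimal sketch of the usual fix, assuming Apache with mod_deflate and mod_php; the file path below is a placeholder standing in for the values the real script already computes.

        <?php
        // Sketch: turn off compression and output buffering for this one request so
        // the Content-Length header survives and the file streams through untouched.
        $filename    = '/path/to/videos/KillsAids021.wmv';  // placeholder path
        $contenttype = 'video/x-ms-wmv';
        $title       = 'KillsAids021';
        $ext         = 'wmv';

        ini_set('zlib.output_compression', 'Off');   // PHP-level gzip off
        if (function_exists('apache_setenv')) {
            @apache_setenv('no-gzip', '1');          // hint mod_deflate to skip this request (mod_php SAPI only)
        }
        while (ob_get_level() > 0) {
            ob_end_clean();                          // discard any active output buffers
        }

        header('Content-Type: ' . $contenttype);
        header('Content-Length: ' . filesize($filename));
        header('Content-Disposition: attachment; filename="' . $title . '.' . $ext . '"');
        readfile($filename);
        exit;

    With compression out of the way the response is no longer chunked, so browsers see the real size and can resume or estimate large downloads normally.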

    Read the article

  • What's wrong with Bundler working with RubyGems to push a Git repo to Heroku?

    - by stanigator
    I've made sure that all the files are in the root of the repository as recommended in this discussion. However, as I follow the instructions in this section of the book, I can't get through the section without the problems. What do you think is happening with my system that's causing the error? I have no clue at the moment of what the problem means despite reading the following in the log. Thanks in advance for your help! stanley@ubuntu:~/rails_sample/first_app$ git push heroku master Warning: Permanently added the RSA host key for IP address '50.19.85.156' to the list of known hosts. Counting objects: 96, done. Compressing objects: 100% (79/79), done. Writing objects: 100% (96/96), 28.81 KiB, done. Total 96 (delta 22), reused 0 (delta 0) -----> Heroku receiving push -----> Ruby/Rails app detected -----> Installing dependencies using Bundler version 1.2.0.pre Running: bundle install --without development:test --path vendor/bundle --binstubs bin/ --deployment Fetching gem metadata from https://rubygems.org/....... Installing rake (0.9.2.2) Installing i18n (0.6.0) Installing multi_json (1.3.5) Installing activesupport (3.2.3) Installing builder (3.0.0) Installing activemodel (3.2.3) Installing erubis (2.7.0) Installing journey (1.0.3) Installing rack (1.4.1) Installing rack-cache (1.2) Installing rack-test (0.6.1) Installing hike (1.2.1) Installing tilt (1.3.3) Installing sprockets (2.1.3) Installing actionpack (3.2.3) Installing mime-types (1.18) Installing polyglot (0.3.3) Installing treetop (1.4.10) Installing mail (2.4.4) Installing actionmailer (3.2.3) Installing arel (3.0.2) Installing tzinfo (0.3.33) Installing activerecord (3.2.3) Installing activeresource (3.2.3) Installing coffee-script-source (1.3.3) Installing execjs (1.3.2) Installing coffee-script (2.2.0) Installing rack-ssl (1.3.2) Installing json (1.7.3) with native extensions Installing rdoc (3.12) Installing thor (0.14.6) Installing railties (3.2.3) Installing coffee-rails (3.2.2) Installing jquery-rails (2.0.2) Using bundler (1.2.0.pre) Installing rails (3.2.3) Installing sass (3.1.18) Installing sass-rails (3.2.5) Installing sqlite3 (1.3.6) with native extensions Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /usr/local/bin/ruby extconf.rb checking for sqlite3.h... no sqlite3.h is missing. Try 'port install sqlite3 +universal' or 'yum install sqlite-devel' and check your shared library search path (the location where your sqlite3 shared library is located). *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/local/bin/ruby --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib --enable-local --disable-local Gem files will remain installed in /tmp/build_3tplrxvj7qa81/vendor/bundle/ruby/1.9.1/gems/sqlite3-1.3.6 for inspection. Results logged to /tmp/build_3tplrxvj7qa81/vendor/bundle/ruby/1.9.1/gems/sqlite3-1.3.6/ext/sqlite3/gem_make.out An error occurred while installing sqlite3 (1.3.6), and Bundler cannot continue. 
Make sure that `gem install sqlite3 -v '1.3.6'` succeeds before bundling. ! ! Failed to install gems via Bundler. ! ! Heroku push rejected, failed to compile Ruby/rails app To [email protected]:growing-mountain-2788.git ! [remote rejected] master -> master (pre-receive hook declined) error: failed to push some refs to '[email protected]:growing-mountain-2788.git' ------Gemfile------------------------ As requested, here's the auto-generated gemfile: source 'https://rubygems.org' gem 'rails', '3.2.3' # Bundle edge Rails instead: # gem 'rails', :git => 'git://github.com/rails/rails.git' gem 'sqlite3' gem 'json' # Gems used only for assets and not required # in production environments by default. group :assets do gem 'sass-rails', '~> 3.2.3' gem 'coffee-rails', '~> 3.2.1' # See https://github.com/sstephenson/execjs#readme for more supported runtimes # gem 'therubyracer', :platform => :ruby gem 'uglifier', '>= 1.0.3' end gem 'jquery-rails' # To use ActiveModel has_secure_password # gem 'bcrypt-ruby', '~> 3.0.0' # To use Jbuilder templates for JSON # gem 'jbuilder' # Use unicorn as the app server # gem 'unicorn' # Deploy with Capistrano # gem 'capistrano' # To use debugger # gem 'ruby-debug'

    Read the article

  • Cannot access localhost using IP

    - by Robert
    I have done a small web development project using eclipse. It runs well when I try running it on browser with url localhost:8080/myproject/home.html. But if I want to access it on another machine (laptop, mobile, etc. using the same wifi) it is not possible; it is not able to connect. After Googling for a while found out that I have to use the IP address instead of 'localhost'. So I tried 10.0.0.4:8080/myproject/home.html, but still does not work. In fact i am unable to open that url on the same machine (where localhost:8080/myproject/home.html works fine). I also added a new Inbound rule in control panel firewall settings, allowing access to all ports for protocol TCP. Still have problem in running application with the url 10.0.0.4:8080/myproject/home.html (both on same machine as well as laptop and mobile). FYI i am using Eclipse Indigo, Apache tomcat 6.0 and server.xml file contents is as below: <?xml version="1.0" encoding="UTF-8"?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --><!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --><Server port="8005" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener SSLEngine="on" className="org.apache.catalina.core.AprLifecycleListener"/> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener"/> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener"/> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"/> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"/> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource auth="Container" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" name="UserDatabase" pathname="conf/tomcat-users.xml" type="org.apache.catalina.UserDatabase"/> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. 
Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" address="10.0.0.4" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443"/> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine defaultHost="localhost" name="Catalina"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host appBase="webapps" autoDeploy="true" name="localhost" unpackWARs="true" xmlNamespaceAware="false" xmlValidation="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <Context docBase="myproject" path="/myproject" reloadable="true" source="org.eclipse.jst.jee.server:myproject"/></Host> </Engine> </Service> </Server>

    Read the article

  • HTML Tables with jQuery Filtering

    - by Bry4n
    Let's say I have... <form action="#"> <fieldset> to:<input type="text" name="search" value="" id="to" /> from:<input type="text" name="search" value="" id="from" /> </fieldset> </form> <table border=1"> <tr class="headers"> <th class="bluedata"height="20px" valign="top">63rd St. &amp; Malvern Av. Loop<BR/></th> <th class="yellowdata"height="20px" valign="top">52nd St. &amp; Lansdowne Av.<BR/></th> <th class="bluedata"height="20px" valign="top">Lancaster &amp; Girard Avs<BR/></th> <th class="yellowdata"height="20px" valign="top">40th St. &amp; Lancaster Av.<BR/></th> <th class="bluedata"height="20px" valign="top">36th &amp; Market Sts<BR/></th> <th class="yellowdata"height="20px" valign="top">Juniper Station<BR/></th> </tr> <tr> <td class="bluedata"height="20px" title="63rd St. &amp; Malvern Av. Loop"> <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table> </td> <td class="yellowdata"height="20px" title="52nd St. &amp; Lansdowne Av."> <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table> </td> <td class="bluedata"height="20px" title="Lancaster &amp; Girard Avs"> <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table> </td> <td class="yellowdata"height="20px" title="40th St. &amp; Lancaster Av."> <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table> </td> <td class="bluedata"height="20px" title="36th &amp; Market Sts"> <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table> </td> <td class="bluedata"height="20px" title="Juniper Station"> <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table> </td> </tr> </table> Now depending upon what data is typed into the textboxes, I need the table trs/tds to show or hide. So if I type in 63rd in "to" box, and juniper in the "from" box, I need only those two trs/tds showing in that order and none of the others.

    Read the article

  • Search function fails because it refers to the wrong controller action?

    - by Christoffer
    My Sunspot search function (sunspot_rails gem) works just fine in my index view, but when I duplicate it to my show view my search breaks... views/supplierproducts/show.html.erb <%= form_tag supplierproducts_path, :method => :get, :id => "supplierproducts_search" do %> <p> <%= text_field_tag :search, params[:search], placeholder: "Search by SKU, product name & EAN number..." %> </p> <div id="supplierproducts"><%= render 'supplierproducts' %></div> <% end %> assets/javascripts/application.js $(function () { $('#supplierproducts th a').live('click', function () { $.getScript(this.href); return false; } ); $('#supplierproducts_search input').keyup(function () { $.get($("#supplierproducts_search").attr("action"), $("#supplierproducts_search").serialize(), null, 'script'); return false; }); }); views/supplierproducts/show.js.erb $('#supplierproducts').html('<%= escape_javascript(render("supplierproducts")) %>'); views/supplierproducts/_supplierproducts.hmtl.erb <%= hidden_field_tag :direction, params[:direction] %> <%= hidden_field_tag :sort, params[:sort] %> <table class="table table-bordered"> <thead> <tr> <th><%= sortable "sku", "SKU" %></th> <th><%= sortable "name", "Product name" %></th> <th><%= sortable "stock", "Stock" %></th> <th><%= sortable "price", "Price" %></th> <th><%= sortable "ean", "EAN number" %></th> </tr> </thead> <% for supplierproduct in @supplier.supplierproducts %> <tbody> <tr> <td><%= supplierproduct.sku %></td> <td><%= supplierproduct.name %></td> <td><%= supplierproduct.stock %></td> <td><%= supplierproduct.price %></td> <td><%= supplierproduct.ean %></td> </tr> </tbody> <% end %> </table> controllers/supplierproducts_controller.rb class SupplierproductsController < ApplicationController helper_method :sort_column, :sort_direction def show @supplier = Supplier.find(params[:id]) @search = @supplier.supplierproducts.search do fulltext params[:search] end @supplierproducts = @search.results end end private def sort_column Supplierproduct.column_names.include?(params[:sort]) ? params[:sort] : "name" end def sort_direction %w[asc desc].include?(params[:direction]) ? params[:direction] : "asc" end models/supplierproduct.rb class Supplierproduct < ActiveRecord::Base attr_accessible :ean, :name, :price, :sku, :stock, :supplier_id belongs_to :supplier validates :supplier_id, presence: true searchable do text :ean, :name, :sku end end Visiting show.html.erb works just fine. Log shows: Started GET "/supplierproducts/2" for 127.0.0.1 at 2012-06-24 13:44:52 +0200 Processing by SupplierproductsController#show as HTML Parameters: {"id"=>"2"} Supplier Load (0.1ms) SELECT "suppliers".* FROM "suppliers" WHERE "suppliers"."id" = ? 
LIMIT 1 [["id", "2"]] SOLR Request (252.9ms) [ path=#<RSolr::Client:0x007fa5880b8e68> parameters={data: fq=type%3ASupplierproduct&start=0&rows=30&q=%2A%3A%2A, method: post, params: {:wt=>:ruby}, query: wt=ruby, headers: {"Content-Type"=>"application/x-www-form-urlencoded; charset=UTF-8"}, path: select, uri: http://localhost:8982/solr/select?wt=ruby, open_timeout: , read_timeout: } ] Supplierproduct Load (0.2ms) SELECT "supplierproducts".* FROM "supplierproducts" WHERE "supplierproducts"."id" IN (1) Supplierproduct Load (0.1ms) SELECT "supplierproducts".* FROM "supplierproducts" WHERE "supplierproducts"."supplier_id" = 2 Rendered supplierproducts/_supplierproducts.html.erb (2.2ms) Rendered supplierproducts/show.html.erb within layouts/application (3.3ms) Rendered layouts/_shim.html.erb (0.0ms) User Load (0.1ms) SELECT "users".* FROM "users" WHERE "users"."remember_token" = 'zMrtTbDun2MjMHRApSthCQ' LIMIT 1 Rendered layouts/_header.html.erb (2.1ms) Rendered layouts/_footer.html.erb (0.2ms) Completed 200 OK in 278ms (Views: 20.6ms | ActiveRecord: 0.6ms | Solr: 252.9ms) But it breaks when I type in a search. Log shows: Started GET "/supplierproducts?utf8=%E2%9C%93&search=a&direction=&sort=&_=1340538830635" for 127.0.0.1 at 2012-06-24 13:53:50 +0200 Processing by SupplierproductsController#index as JS Parameters: {"utf8"=>"?", "search"=>"a", "direction"=>"", "sort"=>"", "_"=>"1340538830635"} Rendered supplierproducts/_supplierproducts.html.erb (2.4ms) Rendered supplierproducts/index.js.erb (2.9ms) Completed 500 Internal Server Error in 6ms ActionView::Template::Error (undefined method `supplierproducts' for nil:NilClass): 10: <th><%= sortable "ean", "EAN number" %></th> 11: </tr> 12: </thead> 13: <% for supplierproduct in @supplier.supplierproducts %> 14: <tbody> 15: <tr> 16: <td><%= supplierproduct.sku %></td> app/views/supplierproducts/_supplierproducts.html.erb:13:in `_app_views_supplierproducts__supplierproducts_html_erb___2251600857885474606_70174444831200' app/views/supplierproducts/index.js.erb:1:in `_app_views_supplierproducts_index_js_erb___1613906916161905600_70174464073480' Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (33.3ms) Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (0.9ms) Rendered /Users/Computer/.rvm/gems/ruby-1.9.3-p194@myapp/gems/actionpack-3.2.3/lib/action_dispatch/middleware/templates/rescues/template_error.erb within rescues/layout (39.7ms)

    Read the article

  • I want to keep the values in the textboxes after the onchange function

    - by user1908045
    hello i have problem with this code..I want to keep the values of the textboxes when the pageload and didnt write again the values. I have two drop down list. the First one is the country when the country selected then the page load and appear the city to select but afte the pageload the values on textbox is empty. I want to keep the values of textbox when the page load. This is the code <head> <script type="text/javascript"> function Load_id() { var Count = document.getElementById("Count").value; var Count_txt = "?Count=" location = Count_txt + Count } </script> <meta charset="UTF-8"> </head> <body> <div class="main"> <div class="headers"> <table> <tr><td rowspan="2"><img alt="unipi" src="/Images/logo.jpeg" height="75" width="52"></td> <td>University</td></tr> <tr><td>Data</td></tr> </table> </div> <div class="form"> <h3>Personal</h3><br/><br/><br/> <form id="Page1" name="Page1" action="Form1Sub.php" method="Post"> <table style="width:520px;text-align:left;"> <tr><td><label>Number:</label></td> <td><input type="text" required="required" id="AM" name="AM" value=""/></td> </tr> <tr><td><label>Name:</label></td> <td><input type="text" required="required" name="Name"/></td> </tr> <?php $host="localhost"; $username=""; $password=""; $dbName="Database"; $connection = mysql_connect($host, $username, $password) or die("Couldn't Connect to the Server"); $db = mysql_select_db($dbName, $connection) or die("cannot select DataBase"); $Count = $_GET['Count']; echo "<tr><td><label>Country</label></td>\n"; $country = mysql_query("select DISTINCT Country FROM lut_country_city "); echo " <td><select id=\"Count\" name=\"cat\" onChange=\"Load_id(this)\">\n"; echo " `<option>Select Country</option>\n"; while($nt=mysql_fetch_array($country)){ $selected = ($nt["Country"] == $Count)? "SELECTED":""; echo"<option value=\"".$nt['Country']."\"". $selected." >".$nt['Country']."</option>"; } echo " </select></td></tr>\n"; echo"<tr><td><label>City:</label></td>\n"; $q2 = mysql_query("Select id,City,Country FROM lut_country_city WHERE Country = '$Count'"); echo"<td><select name=\"SelectCity\">\n"; while($row = mysql_fetch_array($q2)) { echo"<option value=\"".$row['id']."\">".$row['City']."</option>"; } echo " </select></td></tr>\n"; ?> </table> <p> <button type="submit" id="Next">Next</button> </form> <form id="form1" action="index.php"> <button id="Back" type="submit">Back</button> </form> </p> </div> </div> </body> </html>
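    The values disappear because Load_id() reloads the page with only ?Count=... in the URL, so nothing else survives the round trip. A rough sketch of one fix, under the assumption that the onchange handler is extended to also append the AM and Name fields to that query string: echo those GET values back into the inputs when PHP rebuilds the page.

        <?php
        // Sketch: re-populate the inputs after the reload. Assumes Load_id() was
        // changed to something like
        //   location = "?Count=" + Count + "&AM=" + encodeURIComponent(am) + "&Name=" + encodeURIComponent(nm);
        $am   = isset($_GET['AM'])   ? htmlspecialchars($_GET['AM'],   ENT_QUOTES, 'UTF-8') : '';
        $name = isset($_GET['Name']) ? htmlspecialchars($_GET['Name'], ENT_QUOTES, 'UTF-8') : '';
        ?>
        <input type="text" required="required" id="AM" name="AM" value="<?php echo $am; ?>" />
        <input type="text" required="required" name="Name" value="<?php echo $name; ?>" />

    The htmlspecialchars() call keeps a quote or angle bracket typed by the user from breaking the markup when it is echoed back.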

    Read the article

  • Stacked up with web service configuration

    - by Allan Chua
    I'm currently stacked with the web service that im creating right now. when Testing it in local it all works fine but when I try to deploy it to the web server it throws me the following error An error occurred while trying to make a request to URI '...my web service URI here....'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details. here is my web config. <?xml version="1.0"?> <configuration> <configSections> </configSections> <system.webServer> <modules runAllManagedModulesForAllRequests="true"> </modules> <validation validateIntegratedModeConfiguration="false" /> <security> <requestFiltering> <requestLimits maxAllowedContentLength="2000000000" /> </requestFiltering> </security> </system.webServer> <connectionStrings> <add name="........" providerName="System.Data.SqlClient" /> </connectionStrings> <appSettings> <!-- Testing --> <add key="DataConnectionString" value="..........." /> </appSettings> <system.web> <compilation debug="true" targetFramework="4.0"> <buildProviders> <add extension=".rdlc" type="Microsoft.Reporting.RdlBuildProvider, Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </buildProviders> </compilation> <httpRuntime executionTimeout="1200" maxRequestLength="2000000" /> </system.web> <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> <behaviors> <serviceBehaviors> <behavior name="Service1"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="2000000000" /> </behavior> <behavior name=""> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> <behavior name="nextSPOTServiceBehavior"> <serviceMetadata httpsGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="2000000000" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <basicHttpBinding> <binding name="SecureBasic" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <security mode="Transport" /> <readerQuotas maxArrayLength="2000000" maxStringContentLength="2000000"/> </binding> <binding name="BasicHttpBinding_IDownloadManagerService" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <security mode="Transport" /> </binding> </basicHttpBinding> </bindings> <services> <service behaviorConfiguration="nextSPOTServiceBehavior" name="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.DownloadManagerService"> <endpoint binding="basicHttpBinding" bindingConfiguration="SecureBasic" name="basicHttpSecure" contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" /> <!--<endpoint binding="basicHttpBinding" bindingConfiguration="" name="basicHttp" 
contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" />--> <!--<endpoint address="" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IDownloadManagerService" contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" /> <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />--> </service> </services > </system.serviceModel> </configuration>

    Read the article

  • Custom Section Name Crashing NSFetchedResultsController

    - by Mike H.
    I have a managed object with a dueDate attribute. Instead of displaying using some ugly date string as the section headers of my UITableView I created a transient attribute called "category" and defined it like so: - (NSString*)category { [self willAccessValueForKey:@"category"]; NSString* categoryName; if ([self isOverdue]) { categoryName = @"Overdue"; } else if ([self.finishedDate != nil]) { categoryName = @"Done"; } else { categoryName = @"In Progress"; } [self didAccessValueForKey:@"category"]; return categoryName; } Here is the NSFetchedResultsController set up: NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Task" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; NSMutableArray* descriptors = [[NSMutableArray alloc] init]; NSSortDescriptor *dueDateDescriptor = [[NSSortDescriptor alloc] initWithKey:@"dueDate" ascending:YES]; [descriptors addObject:dueDateDescriptor]; [dueDateDescriptor release]; [fetchRequest setSortDescriptors:descriptors]; fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:@"category" cacheName:@"Root"]; The table initially displays fine, showing the unfinished items whose dueDate has not passed in a section titled "In Progress". Now, the user can tap a row in the table view which pushes a new details view onto the navigation stack. In this new view the user can tap a button to indicate that the item is now "Done". Here is the handler for the button (self.task is the managed object): - (void)taskDoneButtonTapped { self.task.finishedDate = [NSDate date]; } As soon as the value of the "finishedDate" attribute changes I'm hit with this exception: 2010-03-18 23:29:52.476 MyApp[1637:207] Serious application error. Exception was caught during Core Data change processing: no section named 'Done' found with userInfo (null) 2010-03-18 23:29:52.477 MyApp[1637:207] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'no section named 'Done' found' I've managed to figure out that the UITableView that is currently hidden by the new details view is trying to update its rows and sections because the NSFetchedResultsController was notified that something changed in the data set. 
Here's my table update code (copied from either the Core Data Recipes sample or the CoreBooks sample -- I can't remember which): - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller { [self.tableView beginUpdates]; } - (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath { switch(type) { case NSFetchedResultsChangeInsert: [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeDelete: [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeUpdate: [self configureCell:[self.tableView cellForRowAtIndexPath:indexPath] atIndexPath:indexPath]; break; case NSFetchedResultsChangeMove: [self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; // Reloading the section inserts a new row and ensures that titles are updated appropriately. [self.tableView reloadSections:[NSIndexSet indexSetWithIndex:newIndexPath.section] withRowAnimation:UITableViewRowAnimationFade]; break; } } - (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type { switch(type) { case NSFetchedResultsChangeInsert: [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade]; break; case NSFetchedResultsChangeDelete: [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade]; break; } } - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { [self.tableView endUpdates]; } I put breakpoints in each of these functions and found that only controllerWillChange is called. The exception is thrown before either controller:didChangeObject:atIndexPath:forChangeType:newIndex or controller:didChangeSection:atIndex:forChangeType are called. At this point I'm stuck. If I change my sectionNameKeyPath to just "dueDate" then everything works fine. I think that's because the dueDate attribute never changes whereas the category will be different when read back after the finishedDate attribute changes. Please help! 
UPDATE: Here is my UITableViewDataSource code: - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return [[self.fetchedResultsController sections] count]; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section]; return [sectionInfo numberOfObjects]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [self.tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } [self configureCell:cell atIndexPath:indexPath]; return cell; } - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section { id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section]; return [sectionInfo name]; }

    Read the article

  • Why is this code not working on a Linux server?

    - by user488001
    Hello experts, I am new to Zend Framework, and this code is used for downloading content. It works on localhost, but when I try to execute it on a Linux server it shows a "file not found" error.

    public function downloadAnnouncementsAction()
    {
        $file = $this->_getParam('file');
        $file = str_replace("%2F", "/", $this->_getParam('file'));

        // Allow direct file download (hotlinking)?
        // Empty - allow hotlinking
        // If set to nonempty value (Example: example.com) will only allow downloads when referrer contains this text
        define('ALLOWED_REFERRER', '');

        // Download folder, i.e. folder where you keep all files for download.
        // MUST end with slash (i.e. "/" )
        define('BASE_DIR', 'file_upload');

        // log downloads? true/false
        define('LOG_DOWNLOADS', true);

        // log file name
        define('LOG_FILE', 'downloads.log');

        // Allowed extensions list in format 'extension' => 'mime type'
        // If mime type is set to empty string then script will try to detect mime type
        // itself, which would only work if you have Mimetype or Fileinfo extensions
        // installed on server.
        $allowed_ext = array(
            // audio
            'mp3'  => 'audio/mpeg',
            'wav'  => 'audio/x-wav',
            // video
            'mpeg' => 'video/mpeg',
            'mpg'  => 'video/mpeg',
            'mpe'  => 'video/mpeg',
            'mov'  => 'video/quicktime',
            'avi'  => 'video/x-msvideo'
        );

        // If hotlinking not allowed then make hackers think there are some server problems
        if (ALLOWED_REFERRER !== ''
            && (!isset($_SERVER['HTTP_REFERER'])
                || strpos(strtoupper($_SERVER['HTTP_REFERER']), strtoupper(ALLOWED_REFERRER)) === false)) {
            die("Internal server error. Please contact system administrator.");
        }

        // Make sure program execution doesn't time out
        // Set maximum script execution time in seconds (0 means no limit)
        set_time_limit(0);

        if (!isset($file) || empty($file)) {
            die("Please specify file name for download.");
        }

        // Nullbyte hack fix
        if (strpos($file, "\0") !== FALSE) die('');

        // Get real file name.
        // Remove any path info to avoid hacking by adding relative path, etc.
        $fname = basename($file);

        // Check if the file exists
        // Check in subfolders too
        function find_file($dirname, $fname, &$file_path)
        {
            $dir = opendir($dirname);
            while ($file = readdir($dir)) {
                if (empty($file_path) && $file != '.' && $file != '..') {
                    if (is_dir($dirname.'/'.$file)) {
                        find_file($dirname.'/'.$file, $fname, $file_path);
                    } else {
                        if (file_exists($dirname.'/'.$fname)) {
                            $file_path = $dirname.'/'.$fname;
                            return;
                        }
                    }
                }
            }
        } // find_file

        // get full file path (including subfolders)
        $file_path = '';
        find_file(BASE_DIR, $fname, $file_path);

        if (!is_file($file_path)) {
            die("File does not exist. Make sure you specified correct file name.");
        }

        // file size in bytes
        $fsize = filesize($file_path);

        // file extension
        $fext = strtolower(substr(strrchr($fname, "."), 1));

        // check if allowed extension
        if (!array_key_exists($fext, $allowed_ext)) {
            die("Not allowed file type.");
        }

        // get mime type
        if ($allowed_ext[$fext] == '') {
            $mtype = '';
            // mime type is not set, get from server settings
            if (function_exists('mime_content_type')) {
                $mtype = mime_content_type($file_path);
            } else if (function_exists('finfo_file')) {
                $finfo = finfo_open(FILEINFO_MIME);
                // return mime type
                $mtype = finfo_file($finfo, $file_path);
                finfo_close($finfo);
            }
            if ($mtype == '') {
                $mtype = "application/force-download";
            }
        } else {
            // get mime type defined by admin
            $mtype = $allowed_ext[$fext];
        }

        // Browser will try to save file with this filename, regardless original filename.
        // You can override it if needed.
        if (!isset($_GET['fc']) || empty($_GET['fc'])) {
            $asfname = $fname;
        } else {
            // remove some bad chars
            $asfname = str_replace(array('"', "'", '\\', '/'), '', $_GET['fc']);
            if ($asfname === '') $asfname = 'NoName';
        }

        // set headers
        header("Pragma: public");
        header("Expires: 0");
        header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
        header("Cache-Control: public");
        header("Content-Description: File Transfer");
        header("Content-Type: $mtype");
        header("Content-Disposition: attachment; filename=\"$asfname\"");
        header("Content-Transfer-Encoding: binary");
        header("Content-Length: " . $fsize);

        // download
        // @readfile($file_path);
        $file = @fopen($file_path, "rb");
        if ($file) {
            while (!feof($file)) {
                print(fread($file, 1024*8));
                flush();
                if (connection_status() != 0) {
                    @fclose($file);
                    die();
                }
            }
            @fclose($file);
        }

        // log downloads
        if (!LOG_DOWNLOADS) die();
        $f = @fopen(LOG_FILE, 'a+');
        if ($f) {
            @fputs($f, date("m.d.Y g:ia")." ".$_SERVER['REMOTE_ADDR']." ".$fname."\n");
            @fclose($f);
        }
    }

    Please help...
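    One hedged guess, since the question does not show the server's folder layout: on Linux the relative, case-sensitive BASE_DIR value of 'file_upload' is resolved against the current working directory (usually public/ in a Zend Framework application), which often differs from a Windows/localhost setup. A minimal sketch of a more robust definition follows; APPLICATION_PATH is the constant the standard ZF skeleton defines in public/index.php, and the relative path and exact folder casing are assumptions to adjust for the real server.

    // Sketch only - assumes the upload folder sits next to the application folder;
    // adjust the relative path and the exact (case-sensitive) folder name.
    $uploadDir = realpath(APPLICATION_PATH . '/../file_upload');
    if ($uploadDir === false) {
        die("Upload folder not found - check the path and its casing on the server.");
    }
    define('BASE_DIR', $uploadDir);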

    Read the article

  • PHP setcookie warning

    - by Ranking
    Hello guys, I have a problem with 'setcookie' in PHP and I can't solve it. so I receive this error "Warning: Cannot modify header information - headers already sent by (output started at C:\Program Files\VertrigoServ\www\vote.php:14) in C:\Program Files\VertrigoServ\www\vote.php on line 86" and here is the file.. line 86 is setcookie ($cookie_name, 1, time()+86400, '/', '', 0); is there any other way to do this ?? <html> <head> <title>Ranking</title> <link href="style.css" rel="stylesheet" type="text/css"> </head> <body bgcolor="#EEF0FF"> <div align="center"> <br/> <div align="center"><div id="header"></div></div> <br/> <table width="800" border="0" align="center" cellpadding="5" cellspacing="0" class="mid-table"> <tr><td height="5"> <center> <table border="0" cellpadding="0" cellspacing="0" align="center" style="padding-top:5px;"> <tr> <td align="center" valign="top"><img src="images/ads/top_banner.png"></td> </tr> </table> </center> </td></tr> <tr><td height="5"></td></tr> </table> <br/> <?php include "conf.php"; $id = $_GET['id']; if (!isset($_POST['submitted'])) { if (isset($_GET['id']) && is_numeric($_GET['id'])) { $id = mysql_real_escape_string($_GET['id']); $query = mysql_query("SELECT SQL_CACHE id, name FROM s_servers WHERE id = $id"); $row = mysql_fetch_assoc($query); ?> <form action="" method="POST"> <table width="800" height="106" border="0" align="center" cellpadding="3" cellspacing="0" class="mid-table"> <tr><td><div align="center"> <p>Code: <input type="text" name="kod" class="port" /><img src="img.php" id="captcha2" alt="" /><a href="javascript:void(0);" onclick="document.getElementById('captcha2').src = document.getElementById('captcha2').src + '?' + (new Date()).getMilliseconds()">Refresh</a></p><br /> <p><input type="submit" class="vote-button" name="vote" value="Vote for <?php echo $row['name']; ?>" /></p> <input type="hidden" name="submitted" value="TRUE" /> <input type="hidden" name="id" value="<?php echo $row['id']; ?>" /> </div></td></tr> <tr><td align="center" valign="top"><img src="images/ads/top_banner.png"></td></tr> </table> </form> <?php } else { echo '<font color="red">You must select a valid server to vote for it!</font>'; } } else { $kod=$_POST['kod']; if($kod!=$_COOKIE[imgcodepage]) { echo "The code does not match"; } else { $id = mysql_real_escape_string($_POST['id']); $query = "SELECT SQL_CACHE id, votes FROM s_servers WHERE id = $id"; $result = mysql_query($query) OR die(mysql_error()); $row = mysql_fetch_array($result, MYSQL_ASSOC); $votes = $row['votes']; $id = $row['id']; $cookie_name = 'vote_'.$id; $ip = $_SERVER['REMOTE_ADDR']; $ltime = mysql_fetch_assoc(mysql_query("SELECT SQL_CACHE `time` FROM `s_votes` WHERE `sid`='$id' AND `ip`='$ip'")); $ltime = $ltime['time'] + 86400; $time = time(); if (isset($_COOKIE['vote_'.$id]) OR $ltime > $time) { echo 'You have already voted in last 24 hours! Your vote is not recorded.'; } else { $votes++; $query = "UPDATE s_servers SET votes = $votes WHERE id = $id"; $time = time(); $query2 = mysql_query("INSERT INTO `s_votes` (`ip`, `time`, `sid`) VALUES ('$ip', '$time', '$id')"); $result = mysql_query($query) OR die(mysql_error()); setcookie ($cookie_name, 1, time()+86400, '/', '', 0); } } } ?> <p><a href="index.php">[Click here if you don't want to vote]</a></p><br/> <p><a href="index.php">Ranking.net</a> &copy; 2010-2011<br> </p> </div> </body> </html> Thanks a lot!
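    The warning itself names the cause: setcookie() has to send an HTTP header, but the page has already emitted HTML (everything above the PHP block) by the time line 86 runs. Two common fixes are to move the vote-processing PHP, including setcookie(), above any HTML output, or to buffer the output so the header can still be sent. Below is a minimal sketch of the buffering variant; ob_start() must be the very first statement in vote.php, before any whitespace or markup, and the markup shown is abbreviated.

    <?php
    // vote.php - sketch only: buffer all output so setcookie() further down
    // can still add its Set-Cookie header after HTML has been generated.
    ob_start();
    ?>
    <html>
    <head><title>Ranking</title></head>
    <body>
    <?php
        // ... existing voting logic ...
        setcookie($cookie_name, 1, time() + 86400, '/', '', 0);
    ?>
    </body>
    </html>
    <?php
    ob_end_flush(); // send the buffered page together with the cookie header
    ?>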

    Read the article

  • jQuery Globalization Plugin from Microsoft

    - by ScottGu
    Last month I blogged about how Microsoft is starting to make code contributions to jQuery, and about some of the first code contributions we were working on: jQuery Templates and Data Linking support. Today, we released a prototype of a new jQuery Globalization Plugin that enables you to add globalization support to your JavaScript applications. This plugin includes globalization information for over 350 cultures ranging from Scottish Gaelic, Frisian, Hungarian, Japanese, to Canadian English.  We will be releasing this plugin to the community as open-source. You can download our prototype for the jQuery Globalization plugin from our Github repository: http://github.com/nje/jquery-glob You can also download a set of samples that demonstrate some simple use-cases with it here. Understanding Globalization The jQuery Globalization plugin enables you to easily parse and format numbers, currencies, and dates for different cultures in JavaScript. For example, you can use the Globalization plugin to display the proper currency symbol for a culture: You also can use the Globalization plugin to format dates so that the day and month appear in the right order and the day and month names are correctly translated: Notice above how the Arabic year is displayed as 1431. This is because the year has been converted to use the Arabic calendar. Some cultural differences, such as different currency or different month names, are obvious. Other cultural differences are surprising and subtle. For example, in some cultures, the grouping of numbers is done unevenly. In the "te-IN" culture (Telugu in India), groups have 3 digits and then 2 digits. The number 1000000 (one million) is written as "10,00,000". Some cultures do not group numbers at all. All of these subtle cultural differences are handled by the jQuery Globalization plugin automatically. Getting dates right can be especially tricky. Different cultures have different calendars such as the Gregorian and UmAlQura calendars. A single culture can even have multiple calendars. For example, the Japanese culture uses both the Gregorian calendar and a Japanese calendar that has eras named after Japanese emperors. The Globalization Plugin includes methods for converting dates between all of these different calendars. Using Language Tags The jQuery Globalization plugin uses the language tags defined in the RFC 4646 and RFC 5646 standards to identity cultures (see http://tools.ietf.org/html/rfc5646). A language tag is composed out of one or more subtags separated by hyphens. For example: Language Tag Language Name (in English) en-AU English (Australia) en-BZ English (Belize) en-CA English (Canada) Id Indonesian zh-CHS Chinese (Simplified) Legacy Zu isiZulu Notice that a single language, such as English, can have several language tags. Speakers of English in Canada format numbers, currencies, and dates using different conventions than speakers of English in Australia or the United States. You can find the language tag for a particular culture by using the Language Subtag Lookup tool located here:  http://rishida.net/utils/subtags/ The jQuery Globalization plugin download includes a folder named globinfo that contains the information for each of the 350 cultures. Actually, this folder contains more than 700 files because the folder includes both minified and un-minified versions of each file. 
For example, the globinfo folder includes JavaScript files named jQuery.glob.en-AU.js for English Australia, jQuery.glob.id.js for Indonesia, and jQuery.glob.zh-CHS for Chinese (Simplified) Legacy. Example: Setting a Particular Culture Imagine that you have been asked to create a German website and want to format all of the dates, currencies, and numbers using German formatting conventions correctly in JavaScript on the client. The HTML for the page might look like this: Notice the span tags above. They mark the areas of the page that we want to format with the Globalization plugin. We want to format the product price, the date the product is available, and the units of the product in stock. To use the jQuery Globalization plugin, we’ll add three JavaScript files to the page: the jQuery library, the jQuery Globalization plugin, and the culture information for a particular language: In this case, I’ve statically added the jQuery.glob.de-DE.js JavaScript file that contains the culture information for German. The language tag “de-DE” is used for German as spoken in Germany. Now that I have all of the necessary scripts, I can use the Globalization plugin to format the product price, date available, and units in stock values using the following client-side JavaScript: The jQuery Globalization plugin extends the jQuery library with new methods - including new methods named preferCulture() and format(). The preferCulture() method enables you to set the default culture used by the jQuery Globalization plugin methods. Notice that the preferCulture() method accepts a language tag. The method will find the closest culture that matches the language tag. The $.format() method is used to actually format the currencies, dates, and numbers. The second parameter passed to the $.format() method is a format specifier. For example, passing “c” causes the value to be formatted as a currency. The ReadMe file at github details the meaning of all of the various format specifiers: http://github.com/nje/jquery-glob When we open the page in a browser, everything is formatted correctly according to German language conventions. A euro symbol is used for the currency symbol. The date is formatted using German day and month names. Finally, a period instead of a comma is used a number separator: You can see a running example of the above approach with the 3_GermanSite.htm file in this samples download. Example: Enabling a User to Dynamically Select a Culture In the previous example we explicitly said that we wanted to globalize in German (by referencing the jQuery.glob.de-DE.js file). Let’s now look at the first of a few examples that demonstrate how to dynamically set the globalization culture to use. Imagine that you want to display a dropdown list of all of the 350 cultures in a page. When someone selects a culture from the dropdown list, you want all of the dates in the page to be formatted using the selected culture. Here’s the HTML for the page: Notice that all of the dates are contained in a <span> tag with a data-date attribute (data-* attributes are a new feature of HTML 5 that conveniently also still work with older browsers). We’ll format the date represented by the data-date attribute when a user selects a culture from the dropdown list. In order to display dates for any possible culture, we’ll include the jQuery.glob.all.js file like this: The jQuery Globalization plugin includes a JavaScript file named jQuery.glob.all.js. 
This file contains globalization information for all of the more than 350 cultures supported by the Globalization plugin.  At 367KB minified, this file is not small. Because of the size of this file, unless you really need to use all of these cultures at the same time, we recommend that you add the individual JavaScript files for particular cultures that you intend to support instead of the combined jQuery.glob.all.js to a page. In the next sample I’ll show how to dynamically load just the language files you need. Next, we’ll populate the dropdown list with all of the available cultures. We can use the $.cultures property to get all of the loaded cultures: Finally, we’ll write jQuery code that grabs every span element with a data-date attribute and format the date: The jQuery Globalization plugin’s parseDate() method is used to convert a string representation of a date into a JavaScript date. The plugin’s format() method is used to format the date. The “D” format specifier causes the date to be formatted using the long date format. And now the content will be globalized correctly regardless of which of the 350 languages a user visiting the page selects.  You can see a running example of the above approach with the 4_SelectCulture.htm file in this samples download. Example: Loading Globalization Files Dynamically As mentioned in the previous section, you should avoid adding the jQuery.glob.all.js file to a page whenever possible because the file is so large. A better alternative is to load the globalization information that you need dynamically. For example, imagine that you have created a dropdown list that displays a list of languages: The following jQuery code executes whenever a user selects a new language from the dropdown list. The code checks whether the globalization file associated with the selected language has already been loaded. If the globalization file has not been loaded then the globalization file is loaded dynamically by taking advantage of the jQuery $.getScript() method. The globalizePage() method is called after the requested globalization file has been loaded, and contains the client-side code to perform the globalization. The advantage of this approach is that it enables you to avoid loading the entire jQuery.glob.all.js file. Instead you only need to load the files that you need and you don’t need to load the files more than once. The 5_Dynamic.htm file in this samples download demonstrates how to implement this approach. Example: Setting the User Preferred Language Automatically Many websites detect a user’s preferred language from their browser settings and automatically use it when globalizing content. A user can set a preferred language for their browser. Then, whenever the user requests a page, this language preference is included in the request in the Accept-Language header. When using Microsoft Internet Explorer, you can set your preferred language by following these steps: Select the menu option Tools, Internet Options. Select the General tab. Click the Languages button in the Appearance section. Click the Add button to add a new language to the list of languages. Move your preferred language to the top of the list. Notice that you can list multiple languages in the Language Preference dialog. All of these languages are sent in the order that you listed them in the Accept-Language header: Accept-Language: fr-FR,id-ID;q=0.7,en-US;q=0.3 Strangely, you cannot retrieve the value of the Accept-Language header from client JavaScript. 
Microsoft Internet Explorer and Mozilla Firefox support a bevy of language related properties exposed by the window.navigator object, such as windows.navigator.browserLanguage and window.navigator.language, but these properties represent either the language set for the operating system or the language edition of the browser. These properties don’t enable you to retrieve the language that the user set as his or her preferred language. The only reliable way to get a user’s preferred language (the value of the Accept-Language header) is to write server code. For example, the following ASP.NET page takes advantage of the server Request.UserLanguages property to assign the user’s preferred language to a client JavaScript variable named acceptLanguage (which then allows you to access the value using client-side JavaScript): In order for this code to work, the culture information associated with the value of acceptLanguage must be included in the page. For example, if someone’s preferred culture is fr-FR (French in France) then you need to include either the jQuery.glob.fr-FR.js or the jQuery.glob.all.js JavaScript file in the page or the culture information won’t be available.  The “6_AcceptLanguages.aspx” sample in this samples download demonstrates how to implement this approach. If the culture information for the user’s preferred language is not included in the page then the $.preferCulture() method will fall back to using the neutral culture (for example, using jQuery.glob.fr.js instead of jQuery.glob.fr-FR.js). If the neutral culture information is not available then the $.preferCulture() method falls back to the default culture (English). Example: Using the Globalization Plugin with the jQuery UI DatePicker One of the goals of the Globalization plugin is to make it easier to build jQuery widgets that can be used with different cultures. We wanted to make sure that the jQuery Globalization plugin could work with existing jQuery UI plugins such as the DatePicker plugin. To that end, we created a patched version of the DatePicker plugin that can take advantage of the Globalization plugin when rendering a calendar. For example, the following figure illustrates what happens when you add the jQuery Globalization and the patched jQuery UI DatePicker plugin to a page and select Indonesian as the preferred culture: Notice that the headers for the days of the week are displayed using Indonesian day name abbreviations. Furthermore, the month names are displayed in Indonesian. You can download the patched version of the jQuery UI DatePicker from our github website. Or you can use the version included in this samples download and used by the 7_DatePicker.htm sample file. Summary I’m excited about our continuing participation in the jQuery community. This Globalization plugin is the third jQuery plugin that we’ve released. We’ve really appreciated all of the great feedback and design suggestions on the jQuery templating and data-linking prototypes that we released earlier this year.  We also want to thank the jQuery and jQuery UI teams for working with us to create these plugins. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. You can follow me at: twitter.com/scottgu
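    To make the walkthrough above concrete, here is a short sketch of the calls it describes, using the method names from the post ($.preferCulture(), $.format(), $.parseDate()). The element ids, sample values, and exact rendered output are illustrative only, since the plugin was a prototype at the time.

    // Assumes jquery.js, the Globalization plugin, and jQuery.glob.de-DE.js are loaded.
    $(document).ready(function () {
        // pick the closest matching culture for all subsequent calls
        $.preferCulture("de-DE");

        // "c" formats as currency, "D" uses the long date pattern
        $("#price").html($.format(1320.99, "c"));
        $("#available").html($.format(new Date(2010, 6, 8), "D"));

        // re-format every span carrying a data-date attribute,
        // as in the culture-picker example above
        $("span[data-date]").each(function () {
            var date = $.parseDate($(this).attr("data-date"));
            $(this).html($.format(date, "D"));
        });
    });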

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder is the fact that the APIs changed a while back, with a completely new authentication scheme that's described and not directly linked documentation topic also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data Link on the main page Select Microsoft Translator from the list This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free service which provides 2 megabytes for translations a month for free. Prices go up steeply from there. Pricing is determined by actual bytes of the result translations used. Max translations are 1000 characters so at minimum this means you get around 2000 translations a month for free. However most translations are probable much less so you can expect larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name The client secret was auto-created and this becomes your 'password' For the redirect url provide any https url: https://microsoft.com works Give this application a description of your choice so you can identify it in the list of apps Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys. 
To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a  bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute life time, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = https://datamarket.accesscontrol.windows.net/v2/OAuth2-13; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it at the oAuth endpoint which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call. 
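    Since the bearer token is only good for roughly ten minutes, here is one possible caching wrapper around GetBingAuthToken() along the lines the article hints at; MemoryCache and the nine-minute expiration are my own choices, not part of the article's code.

    using System;
    using System.Runtime.Caching;

    public string GetCachedBingAuthToken(string clientId, string clientSecret)
    {
        var cache = MemoryCache.Default;
        string key = "BingAuthToken_" + clientId;

        // reuse the token while it is still fresh
        var token = cache.Get(key) as string;
        if (token != null)
            return token;

        token = GetBingAuthToken(clientId, clientSecret);
        if (token != null)
        {
            // expire a little before the ~10 minute token lifetime runs out
            cache.Set(key, token, DateTimeOffset.UtcNow.AddMinutes(9));
        }
        return token;
    }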
Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header and the value is prefixed with a 'Bearer ' prefix for the string. Here's what the simple Translate API call looks like:/// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a CLientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth cookie. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods but here are a couple of test methods that demonstrate the two usage scenarios:[TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de",auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!","en","de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available and this one is the simplest one but probably also the most common one that translates a single string. 
You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API.
Dual Interface
I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose from one of the values and directly embed them into the translated text field.
Lost in Translation
There you have it. As I mentioned, once you have all the bureaucratic crap out of the way calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating…
Related Posts
- Translation method Source on Github
- Translating with Google Translate without Google API Keys
- Creating a data-driven ASP.NET Resource Provider
© Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization  ASP.NET  .NET

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically making it much easier than previous versions to take advantage of compression that’s built-in to the Web server. IIS 7 also supports dynamic compression which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing and so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let’s take a look at each of the two approaches available: Static Compression Compresses static content from the hard disk. IIS can cache this content by compressing the file once and storing the compressed file on disk and serving the compressed alias whenever static content is requested and it hasn’t changed. The overhead for this is minimal and should be aggressively enabled. Dynamic Compression Works against application generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such dynamic compression has a much bigger impact than static caching. How Compression is configured Compression in IIS 7.x  is configured with two .config file elements in the <system.WebServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline all the way from ApplicationHost.config down to the local web.config file. The following is from the the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config forlder) on IIS 7.5 with a couple of small adjustments (added json output and enabled dynamic compression): <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/json" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> <urlCompression doStaticCompression="true" doDynamicCompression="true" /> </system.webServer> </configuration> You can find documentation on the httpCompression and urlCompression keys here respectively: http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx The httpCompression Element – What and How to compress Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime-types which looks at returned Content-Type headers in HTTP responses. 
For example, I added the application/json to mime type to my dynamic compression types above to allow that content to be compressed as well since I have quite a bit of AJAX content that gets sent to the client. The UrlCompression Element – Enables and Disables Compression The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor whether compression is enabled, so it’s a good idea to be explcit! The default for doDynamicCompression='false”, but doStaticCompression="true"! Static Compression is enabled by Default, Dynamic Compression is not Because static compression is very efficient in IIS 7 it’s enabled by default server wide and there probably is no reason to ever change that setting. Dynamic compression however, since it’s more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn’t work. Setting: <urlCompression doDynamicCompression="true" /> in applicationhost.config appears to have no effect and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you’re not likely to want dynamic compression in every application on a server. Rather dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in applicationhost.config doesn’t work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression=”true” in web.config!!! How Static Compression works Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it’s fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server’s hard disk and then reads the content from this location if already compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at: %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\ If you look into the subfolders you’ll find compressed files: These files are pre-compressed and IIS serves them directly to the client until the underlying files are changed. As I mentioned before – static compression is on by default and there’s very little reason to turn that functionality off as it is efficient and just works out of the box. The one tweak you might want to do is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression. 
You can do this with the staticCompressionLevel setting on the scheme element: <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> Other than that the default settings are probably just fine. Dynamic Compression – not so fast! By default dynamic compression is disabled and that’s actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn’t make sense to compress *all* generated content as it would generate a significant amount of overhead. Scott Fortsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server’s performance. There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU related keys that can help minimize the impact of Dynamic Compression on a busy server: dynamicCompressionDisableCpuUsage dynamicCompressionEnableCpuUsage By default these are set to 90 and 50 which means that when the CPU hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again this is actually quite sensible as it utilizes CPU power from compression when available and falling off when the threshold has been hit. It’s a good way some of that extra CPU power on your big servers to use when utilization is low. Again these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90% maybe around 70% to make this a feature that kicks in only if there’s lots of power to spare. I’m not really sure how accurate these CPU readings that IIS uses are as Cpu usage on Web Servers can spike drastically even during low loads. Don’t trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment. Finally for dynamic compression I tend to add one Mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type: <add mimeType="application/json" enabled="true" /> What about Deflate Compression? The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to: <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> to get deflate style compression. The deflate algorithm produces slightly more compact output so I tend to prefer it over GZip but more HTTP clients (other than browsers) support GZip than Deflate so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in applicationhost.config didn’t show up on the site  right away. It required me to do a full IISReset to get that change to show up before I saw the change over to deflate compressed content. Content was slightly more compressed with deflate – not sure if it’s worth the slightly less common compression type, but the option at least is available. IIS 7 finally makes GZip Easy In summary IIS 7 makes GZip easy finally, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. 
But once you know the basic settings I’ve described here and the fact that you can override all of this in your local web.config it’s pretty straight forward to configure GZip support and tweak it exactly to your needs. Static compression is a total no brainer as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic Compression is a little more tricky as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use Related Content Code based ASP.NET GZip Caveats HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011Posted in IIS7   ASP.NET  
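    As a quick recap of the one setting the article stresses must live in the application's own web.config, here is a minimal sketch; the application/json entry shown earlier belongs in the <dynamicTypes> list of the httpCompression element (set in applicationHost.config in the article's setup).

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- local web.config - sketch of the per-application switch discussed above -->
    <configuration>
      <system.webServer>
        <urlCompression doStaticCompression="true" doDynamicCompression="true" />
      </system.webServer>
    </configuration>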

    Read the article
