Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.

Page 232 of 1018

  • Best full text search for MySQL?

    - by ConroyP
    We're currently running MySQL on a LAMP stack and have been looking at implementing a more thorough full-text search on our site. We've looked at MySQL's own full-text search, but it doesn't seem to cope well with large databases, which makes it far too slow for our needs. Our main requirements are:

    - speed in returning results
    - simple updating of the index

    In addition to the above, our "nice to have"s are:

    - ideally not something that requires adding a module to MySQL
    - plays nicely with PHP (the majority of our dev work is done using PHP)

    There seem to be quite a few healthy open-source projects that add fast, reliable full-text search to MySQL, so I'm basically looking for recommendations/suggestions on what you've found to be the most useful product out there, the easiest to set up, etc. So far, the list of ones we've been starting to play around with is:

    - Sphinx: C++ based; used by craigslist and thepiratebay
    - Lucene: Java-based Apache project; powers zeoh.com and zoomf.com
    - Solr: Java-based offshoot of Lucene; used to power searches on Digg, CNet & AOL Channels

    Are there any better ones out there that we haven't come across yet? Can you recommend / advise against any of the options we've gathered so far? Thanks for your help!

    Update: @Cletus suggested Google's Custom Search Engine. We recently trialled this on a couple of projects, and it's an almost-perfect fit for our needs. The problem is that entries on our site are updated quite regularly, and unfortunately the speed at which entries go in / get updated in Google's index was just too slow and erratic for us to rely on, even with the addition of sitemaps and requested crawl-rate changes.
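
    For reference, pointing Sphinx at a MySQL table mostly comes down to a small config file. A minimal sphinx.conf sketch — the table and column names (posts, id, title, body) and the paths are made up for illustration:

        source posts_src
        {
            type      = mysql
            sql_host  = localhost
            sql_user  = searcher
            sql_pass  = secret
            sql_db    = myapp
            # the document id must be the first column selected
            sql_query = SELECT id, title, body FROM posts
        }

        index posts_idx
        {
            source = posts_src
            path   = /var/data/sphinx/posts
        }

    Updating the index is then a cron-friendly "indexer posts_idx --rotate", and PHP can query searchd through the sphinxapi.php client class that ships with Sphinx.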

    Read the article

  • Configuring varnish and django (apache/modwsgi)

    - by Hedde
    I am trying to work out why my application keeps hitting the database while I have set up Varnish in front of Apache. I think I am missing some vital configuration; any tips are welcome.

    This is my curl result:

        HTTP/1.1 200 OK
        Server: Apache/2.2.16 (Debian)
        Content-Language: en-us
        Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        Cache-Control: s-maxage=60, no-transform, max-age=60
        Content-Type: application/json; charset=utf-8
        Date: Sat, 15 Sep 2012 08:19:17 GMT
        Connection: keep-alive

    My varnishlog:

        13 BackendClose - apache
        13 BackendOpen b apache 127.0.0.1 47665 127.0.0.1 8000
        13 TxRequest b GET
        13 TxURL b /api/v1/events/?format=json
        13 TxProtocol b HTTP/1.1
        13 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        13 TxHeader b Host: foobar.com
        13 TxHeader b Accept: */*
        13 TxHeader b X-Forwarded-For: 92.64.200.145
        13 TxHeader b X-Varnish: 979305817
        13 TxHeader b Accept-Encoding: gzip
        13 RxProtocol b HTTP/1.1
        13 RxStatus b 200
        13 RxResponse b OK
        13 RxHeader b Date: Sat, 15 Sep 2012 08:21:28 GMT
        13 RxHeader b Server: Apache/2.2.16 (Debian)
        13 RxHeader b Content-Language: en-us
        13 RxHeader b Content-Encoding: gzip
        13 RxHeader b Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        13 RxHeader b Cache-Control: s-maxage=60, no-transform, max-age=60
        13 RxHeader b Content-Length: 6399
        13 RxHeader b Content-Type: application/json; charset=utf-8
        13 Fetch_Body b 4(length) cls 0 mklen 1
        13 Length b 6399
        13 BackendReuse b apache
        11 SessionOpen c 92.64.200.145 53236 :80
        11 ReqStart c 92.64.200.145 53236 979305817
        11 RxRequest c HEAD
        11 RxURL c /api/v1/events/?format=json
        11 RxProtocol c HTTP/1.1
        11 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        11 RxHeader c Host: foobar.com
        11 RxHeader c Accept: */*
        11 VCL_call c recv lookup
        11 VCL_call c hash
        11 Hash c /api/v1/events/?format=json
        11 Hash c foobar.com
        11 VCL_return c hash
        11 VCL_call c miss fetch
        11 Backend c 13 apache apache
        11 TTL c 979305817 RFC 60 -1 -1 1347697289 0 1347697288 0 60
        11 VCL_call c fetch deliver
        11 ObjProtocol c HTTP/1.1
        11 ObjResponse c OK
        11 ObjHeader c Date: Sat, 15 Sep 2012 08:21:28 GMT
        11 ObjHeader c Server: Apache/2.2.16 (Debian)
        11 ObjHeader c Content-Language: en-us
        11 ObjHeader c Content-Encoding: gzip
        11 ObjHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 ObjHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 ObjHeader c Content-Type: application/json; charset=utf-8
        11 Gzip c u F - 6399 69865 80 80 51128
        11 VCL_call c deliver deliver
        11 TxProtocol c HTTP/1.1
        11 TxStatus c 200
        11 TxResponse c OK
        11 TxHeader c Server: Apache/2.2.16 (Debian)
        11 TxHeader c Content-Language: en-us
        11 TxHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 TxHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 TxHeader c Content-Type: application/json; charset=utf-8
        11 TxHeader c Date: Sat, 15 Sep 2012 08:21:29 GMT
        11 TxHeader c Connection: keep-alive
        11 Length c 0
        11 ReqEnd c 979305817 1347697288.292612076 1347697289.456128597 0.000086784 1.163468122 0.000048399
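
    For context, one common reason a setup like this keeps reaching the backend is the response's "Vary: ... Cookie" header: every distinct Cookie request header gets its own cache object, so requests carrying session cookies effectively bypass the cache. A minimal VCL sketch (hedged — the /api/ match is an assumption based on the URL above, and only makes sense if those responses do not depend on cookies):

        sub vcl_recv {
            if (req.url ~ "^/api/") {
                # drop client cookies so cached objects can be shared
                unset req.http.Cookie;
            }
        }

    Note also that in the log above the client request is a HEAD (the backend fetch is a GET) and the two timestamps are more than the 60-second TTL apart, so a miss in a one-off curl test does not by itself prove caching is broken.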

    Read the article

  • Turing model vs Von Neumann model

    - by Santhosh
    First some background (based on my understanding)...

    The Von Neumann architecture describes the stored-program computer, where instructions and data are stored in memory and the machine works by changing its internal state, i.e. an instruction operates on some data and modifies the data. So inherently, there is state maintained in the system.

    The Turing machine architecture works by manipulating symbols on a tape, i.e. a tape with an infinite number of slots exists, and at any one point in time the Turing machine is at a particular slot. Based on the symbol read at that slot, the machine changes the symbol and moves to a different slot. All of this is deterministic.

    My questions are:

    - Is there any relation between these two models (was the Von Neumann model based on, or inspired by, the Turing model)?
    - Can we say that the Turing model is a superset of the Von Neumann model?
    - Does functional programming fit into the Turing model? If so, how? (I assume FP does not lend itself nicely to the Von Neumann model.)
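
    To make the tape-and-state picture concrete, here is a tiny illustrative Turing-machine interpreter in Python (the machine itself — one that flips bits until it reaches a blank — is made up for the example):

        # transition table: (state, symbol) -> (symbol to write, head move, next state)
        rules = {
            ("flip", "0"): ("1", 1, "flip"),
            ("flip", "1"): ("0", 1, "flip"),
            ("flip", " "): (" ", 0, "halt"),
        }

        def run(tape):
            cells = dict(enumerate(tape))   # a dict stands in for the infinite tape
            head, state = 0, "flip"
            while state != "halt":
                symbol = cells.get(head, " ")          # unvisited slots read as blank
                cells[head], move, state = rules[(state, symbol)]
                head += move
            return "".join(cells[i] for i in sorted(cells)).rstrip()

        print(run("0110"))   # prints "1001"

    Everything the machine "knows" between steps is the pair (state, head position) plus the tape itself — there is no separately addressable memory, which is the key contrast with the Von Neumann picture.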

    Read the article

  • Disable .htaccess in Apache: AllowOverride None set, but .htaccess files are still read

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl sites. After restarting Apache, though, it is still reading the .htaccess files. How can I completely turn off reading these files?

    Update — all files containing "AllowOverride":

    /etc/apache2/mods-available/userdir.conf:

        <IfModule mod_userdir.c>
            UserDir public_html
            UserDir disabled root

            <Directory /home/*/public_html>
                AllowOverride FileInfo AuthConfig Limit Indexes
                Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
                <Limit GET POST OPTIONS>
                    Order allow,deny
                    Allow from all
                </Limit>
                <LimitExcept GET POST OPTIONS>
                    Order deny,allow
                    Deny from all
                </LimitExcept>
            </Directory>
        </IfModule>

    /etc/apache2/mods-available/alias.conf:

        <IfModule alias_module>
            #
            # Aliases: Add here as many aliases as you need (with no limit). The format is
            # Alias fakename realname
            #
            # Note that if you include a trailing / on fakename then the server will
            # require it to be present in the URL. So "/icons" isn't aliased in this
            # example, only "/icons/". If the fakename is slash-terminated, then the
            # realname must also be slash terminated, and if the fakename omits the
            # trailing slash, the realname must also omit it.
            #
            # We include the /icons/ alias for FancyIndexed directory listings. If
            # you do not use FancyIndexing, you may comment this out.
            #
            Alias /icons/ "/usr/share/apache2/icons/"

            <Directory "/usr/share/apache2/icons">
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>

    /etc/apache2/httpd.conf:

        #
        # Directives to allow use of AWStats as a CGI
        #
        Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/"
        Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/"
        Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/"
        ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/"
        #
        # This is to permit URL access to scripts/files in AWStats directory.
        #
        <Directory "/usr/share/doc/awstats/examples/wwwroot">
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        Alias /awstats-icon/ /usr/share/awstats/icon/
        <Directory /usr/share/awstats/icon>
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    /etc/apache2/sites-available/default-ssl:

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

            # A self-signed (snakeoil) certificate can be created by installing
            # the ssl-cert package. See
            # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
            # If both key and certificate are stored in the same file, only the
            # SSLCertificateFile directive is needed.
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            # Server Certificate Chain:
            # Point SSLCertificateChainFile at a file containing the
            # concatenation of PEM encoded CA certificates which form the
            # certificate chain for the server certificate. Alternatively
            # the referenced file can be the same as SSLCertificateFile
            # when the CA certificates are directly appended to the server
            # certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

            # Certificate Authority (CA):
            # Set the CA certificate verification path where to find CA
            # certificates for client authentication or alternatively one
            # huge file containing all of them (file must be PEM encoded)
            # Note: Inside SSLCACertificatePath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

            # Certificate Revocation Lists (CRL):
            # Set the CA revocation path where to find CA CRLs for client
            # authentication or alternatively one huge file containing all
            # of them (file must be PEM encoded)
            # Note: Inside SSLCARevocationPath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

            # Client Authentication (Type):
            # Client certificate verification type and depth. Types are
            # none, optional, require and optional_no_ca. Depth is a
            # number which specifies how deeply to verify the certificate
            # issuer chain before deciding the certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth 10

            # Access Control:
            # With SSLRequire you can do per-directory access control based
            # on arbitrary complex boolean expressions containing server
            # variable checks and other lookup directives. The syntax is a
            # mixture between C and Perl. See the mod_ssl documentation
            # for more details.
            #<Location />
            #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            #             and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            #             and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            #             and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            #             and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
            #            or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>

            # SSL Engine Options:
            # Set various options for the SSL engine.
            # o FakeBasicAuth:
            #   Translate the client X.509 into a Basic Authorisation. This means that
            #   the standard Auth/DBMAuth methods can be used for access control. The
            #   user name is the `one line' version of the client's X.509 certificate.
            #   Note that no password is obtained from the user. Every entry in the user
            #   file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData:
            #   This exports two additional environment variables: SSL_CLIENT_CERT and
            #   SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
            #   server (always existing) and the client (only existing when client
            #   authentication is used). This can be used to import the certificates
            #   into CGI scripts.
            # o StdEnvVars:
            #   This exports the standard SSL/TLS related `SSL_*' environment variables.
            #   Per default this exportation is switched off for performance reasons,
            #   because the extraction step is an expensive operation and is usually
            #   useless for serving static content. So one usually enables the
            #   exportation for CGI and SSI requests only.
            # o StrictRequire:
            #   This denies access when "SSLRequireSSL" or "SSLRequire" applied even
            #   under a "Satisfy any" situation, i.e. when it applies access is denied
            #   and no other module can change it.
            # o OptRenegotiate:
            #   This enables optimized SSL connection renegotiation handling when SSL
            #   directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            # SSL Protocol Adjustments:
            # The safe and default but still SSL/TLS standard compliant shutdown
            # approach is that mod_ssl sends the close notify alert but doesn't wait for
            # the close notify alert from client. When you need a different shutdown
            # approach you can use one of the following variables:
            # o ssl-unclean-shutdown:
            #   This forces an unclean shutdown when the connection is closed, i.e. no
            #   SSL close notify alert is send or allowed to received. This violates
            #   the SSL/TLS standard but is needed for some brain-dead browsers. Use
            #   this when you receive I/O errors because of the standard approach where
            #   mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown:
            #   This forces an accurate shutdown when the connection is closed, i.e. a
            #   SSL close notify alert is send and mod_ssl waits for the close notify
            #   alert of the client. This is 100% SSL/TLS standard compliant, but in
            #   practice often causes hanging connections with brain-dead browsers. Use
            #   this only for browsers where you know that their SSL implementation
            #   works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP
            # keep-alive facility, so you usually additionally want to disable
            # keep-alive for those clients, too. Use variable "nokeepalive" for this.
            # Similarly, one has to force some clients to use HTTP/1.0 to workaround
            # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
            # "force-response-1.0" for this.
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
        </VirtualHost>
        </IfModule>

    /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            Alias /delboy /usr/share/phpmyadmin
            <Directory /usr/share/phpmyadmin>
                # Restrict phpmyadmin access
                Order Deny,Allow
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    /etc/apache2/conf.d/security:

        #
        # Disable access to the entire file system except for the directories that
        # are explicitly allowed later.
        #
        # This currently breaks the configurations that come with some web application
        # Debian packages.
        #
        #<Directory />
        #    AllowOverride None
        #    Order Deny,Allow
        #    Deny from all
        #</Directory>

        # Changing the following options will not really affect the security of the
        # server, but might make attacks slightly more difficult in some cases.

        #
        # ServerTokens
        # This directive configures what you return as the Server HTTP response
        # Header. The default is 'Full' which sends information about the OS-Type
        # and compiled in modules.
        # Set to one of: Full | OS | Minimal | Minor | Major | Prod
        # where Full conveys the most information, and Prod the least.
        #
        #ServerTokens Minimal
        ServerTokens OS
        #ServerTokens Full

        #
        # Optionally add a line containing the server version and virtual host
        # name to server-generated pages (internal error documents, FTP directory
        # listings, mod_status and mod_info output etc., but not CGI generated
        # documents or custom error documents).
        # Set to "EMail" to also include a mailto: link to the ServerAdmin.
        # Set to one of: On | Off | EMail
        #
        #ServerSignature Off
        ServerSignature On

        #
        # Allow TRACE method
        #
        # Set to "extended" to also reflect the request body (only for testing and
        # diagnostic purposes).
        #
        # Set to one of: On | Off | extended
        #
        TraceEnable Off
        #TraceEnable On

    /etc/apache2/apache2.conf:

        #
        # Based upon the NCSA server configuration files originally by Rob McCool.
        #
        # This is the main Apache server configuration file. It contains the
        # configuration directives that give the server its instructions.
        # See http://httpd.apache.org/docs/2.2/ for detailed information about
        # the directives.
        #
        # Do NOT simply read the instructions in here without understanding
        # what they do. They're here only as hints or reminders. If you are unsure
        # consult the online docs. You have been warned.
        #
        # The configuration directives are grouped into three basic sections:
        # 1. Directives that control the operation of the Apache server process as a
        #    whole (the 'global environment').
        # 2. Directives that define the parameters of the 'main' or 'default' server,
        #    which responds to requests that aren't handled by a virtual host.
        #    These directives also provide default values for the settings
        #    of all virtual hosts.
        # 3. Settings for virtual hosts, which allow Web requests to be sent to
        #    different IP addresses or hostnames and have them handled by the
        #    same Apache server process.
        #
        # Configuration and logfile names: If the filenames you specify for many
        # of the server's control files begin with "/" (or "drive:/" for Win32), the
        # server will use that explicit path. If the filenames do *not* begin
        # with "/", the value of ServerRoot is prepended -- so "foo.log"
        # with ServerRoot set to "/etc/apache2" will be interpreted by the
        # server as "/etc/apache2/foo.log".
        #

        ### Section 1: Global Environment
        #
        # The directives in this section affect the overall operation of Apache,
        # such as the number of concurrent requests it can handle or where it
        # can find its configuration files.
        #

        #
        # ServerRoot: The top of the directory tree under which the server's
        # configuration, error, and log files are kept.
        #
        # NOTE! If you intend to place this on an NFS (or otherwise network)
        # mounted filesystem then please read the LockFile documentation (available
        # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>);
        # you will save yourself a lot of trouble.
        #
        # Do NOT add a slash at the end of the directory path.
        #
        #ServerRoot "/etc/apache2"

        #
        # The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
        #
        LockFile ${APACHE_LOCK_DIR}/accept.lock

        #
        # PidFile: The file in which the server should record its process
        # identification number when it starts.
        # This needs to be set in /etc/apache2/envvars
        #
        PidFile ${APACHE_PID_FILE}

        #
        # Timeout: The number of seconds before receives and sends time out.
        #
        Timeout 300

        #
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        #
        KeepAlive On

        #
        # MaxKeepAliveRequests: The maximum number of requests to allow
        # during a persistent connection. Set to 0 to allow an unlimited amount.
        # We recommend you leave this number high, for maximum performance.
        #
        MaxKeepAliveRequests 100

        #
        # KeepAliveTimeout: Number of seconds to wait for the next request from the
        # same client on the same connection.
        #
        KeepAliveTimeout 4

        ##
        ## Server-Pool Size Regulation (MPM specific)
        ##

        # prefork MPM
        # StartServers: number of server processes to start
        # MinSpareServers: minimum number of server processes which are kept spare
        # MaxSpareServers: maximum number of server processes which are kept spare
        # MaxClients: maximum number of server processes allowed to start
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild 500
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a
        #              graceful restart. ThreadLimit can only be changed by stopping
        #              and starting Apache.
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # event MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_event_module>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>

        # These need to be set in /etc/apache2/envvars
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}

        #
        # AccessFileName: The name of the file to look for in each directory
        # for additional configuration directives. See also the AllowOverride
        # directive.
        #
        AccessFileName .htaccess

        #
        # The following lines prevent .htaccess and .htpasswd files from being
        # viewed by Web clients.
        #
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        #
        # DefaultType is the default MIME type the server will use for a document
        # if it cannot otherwise determine one, such as from filename extensions.
        # If your server contains mostly text or HTML documents, "text/plain" is
        # a good value. If most of your content is binary, such as applications
        # or images, you may want to use "application/octet-stream" instead to
        # keep browsers from trying to display binary files as though they are
        # text.
        #
        DefaultType text/plain

        #
        # HostnameLookups: Log the names of clients or just their IP addresses
        # e.g., www.apache.org (on) or 204.62.129.132 (off).
        # The default is off because it'd be overall better for the net if people
        # had to knowingly turn this feature on, since enabling it means that
        # each client request will result in AT LEAST one lookup request to the
        # nameserver.
        #
        HostnameLookups Off

        # ErrorLog: The location of the error log file.
        # If you do not specify an ErrorLog directive within a <VirtualHost>
        # container, error messages relating to that virtual host will be
        # logged here. If you *do* define an error logfile for a <VirtualHost>
        # container, that host's errors will be logged there and not here.
        #
        ErrorLog ${APACHE_LOG_DIR}/error.log

        #
        # LogLevel: Control the number of messages logged to the error_log.
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        #
        LogLevel warn

        # Include module configuration:
        Include mods-enabled/*.load
        Include mods-enabled/*.conf

        # Include all the user configurations:
        Include httpd.conf

        # Include ports listing
        Include ports.conf

        #
        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        #
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        # Include of directories ignores editors' and dpkg's backup files,
        # see README.Debian for details.

        # Include generic snippets of statements
        Include conf.d/

        # Include the virtual host configurations:
        Include sites-enabled/

    Read the article

  • Lync 2010, Kamailio, & Trixbox 2.6.23 (Asterisk 1.4)

    - by slashp
    I'm having an issue trying to connect Lync 2010 phone calls with our trixbox PBX. I've gotten to the point where Kamailio seems to be functioning properly and acting as a bridge between TCP traffic (from Lync) & UDP traffic (to the trixbox, as Asterisk 1.4 does not support SIP over TCP).

    - Our Lync box IP: 10.100.10.41
    - Our Kamailio box IP: 10.100.10.44
    - Our trixbox IP: 10.100.10.2

    The issue I'm running into is as follows when enabling SIP debugging for the Kamailio box:

        <--- SIP read from 10.100.10.44:5060 --->
        PRACK sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41 SIP/2.0
        FROM: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430
        TO: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e
        CSEQ: 24 PRACK
        CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        MAX-FORWARDS: 70
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d
        VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989
        CONTACT: <sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41>
        CONTENT-LENGTH: 0
        USER-AGENT: RTCC/4.0.0.0 MediationServer
        RAck: 1 23 INVITE
        <------------->
        --- (12 headers 0 lines) ---
        Sending to 10.100.10.44 : 5060 (NAT)

        <--- Transmitting (NAT) to 10.100.10.44:5060 --->
        SIP/2.0 481 Call leg/transaction does not exist
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d;received=10.100.10.44
        Via: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989
        From: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430
        To: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e
        Call-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        CSeq: 24 PRACK
        User-Agent: Asterisk PBX
        Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
        Supported: replaces
        Content-Length: 0
        <------------>
        trixbox1*CLI>

        <--- SIP read from 10.100.10.44:5060 --->
        ACK sip:[email protected];user=phone SIP/2.0
        FROM: "John Jones"<sip:9121;[email protected];user=phone>;tag=4852bab430;epid=CF2380792B
        TO: <sip:[email protected];user=phone>;tag=3684a6a24e;epid=CF2380792B
        CSEQ: 23 ACK
        CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        MAX-FORWARDS: 70
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d
        VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK79a21c
        CONTENT-LENGTH: 0

    My SIP trunk on the trixbox looks like this:

        [from-lync]
        exten => _+4XXX!,1,Noop(Stripping + from start of number)
        exten => _+4XXX!,n,Goto(from-internal,${EXTEN:1})

    Though I am still having no luck getting the + stripped or the call to go through. Any ideas would be greatly appreciated. Thank you! -slashp
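
    One hedged note on the dialplan above: Asterisk's Goto() only changes context when it is given all three arguments (context, extension, priority); with two arguments they are read as extension and priority instead. A sketch of the same trunk context with the priority made explicit:

        [from-lync]
        exten => _+4XXX!,1,Noop(Stripping + from start of number)
        exten => _+4XXX!,n,Goto(from-internal,${EXTEN:1},1)

    The 481 replies to PRACK are likely a separate problem: Asterisk 1.4's chan_sip has no support for reliable provisional responses (100rel), so PRACK-based call setup may need to be disabled on the Lync/Kamailio side of the bridge.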

    Read the article

  • JavaScript To Clear Form Field On Submit Before Form Submission To Perl Script

    - by Russell C.
    We have a very long form that has a number of fields and 2 different submit buttons. When a user clicks the 1st submit button ("Photo Search"), the form should POST, and our script will do a search for matching photos based on what the user entered in the text input ("photo_search_text") next to the 1st submit button, then reload the entire form with matching photos added to it. Upon clicking the 2nd submit button ("Save Changes") at the end of the form, it should POST, and our script should update the database with the information the user entered in the form. Unfortunately the layout of the form makes it impossible to separate it into 2 separate forms.

    I checked the entire form POST, and unfortunately the submitted fields are identical no matter which submit button is clicked, so the Perl script processing the form submission can't differentiate which action to perform based on which submit button was pushed. The only thing I can think of is to update the onclick action of the 2nd submit button so it clears the "photo_search_text" field before the form is submitted, and then only perform a photo search if that field has a value.

    Based on all this, my question is: what would the JavaScript look like that clears the "photo_search_text" field when someone clicks on the 2nd submit button? Here is what I've tried so far, none of which has worked:

        <input type="submit" name="submit" onclick="document.update-form.photo_search_text.value='';" value="Save Changes">
        <input type="submit" name="submit" onsubmit="document.update-form.photo_search_text.value='';" value="Save Changes">
        <input type="submit" name="submit" onclick="document.getElementById('photo_search_text')='';" value="Save Changes">

    We also use jQuery on the site, so if there is a way to do this with jQuery instead of plain JavaScript, feel free to provide example code for that instead. Lastly, if there is another way to handle this that I'm not thinking of, any and all suggestions would be welcome. Thanks in advance for your help!
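
    Since jQuery is already on the site, a hedged sketch (assuming the search box has id="photo_search_text"). For what it's worth, the first two attempts above cannot work because document.update-form is parsed as a subtraction (the hyphen), and the third assigns to the element itself rather than its .value:

        $(function() {
            // clear the search box before the "Save Changes" submit fires,
            // so the Perl script sees an empty photo_search_text
            $('input[value="Save Changes"]').click(function() {
                $('#photo_search_text').val('');
            });
        });

    The plain-JavaScript equivalent would be document.forms['update-form'].elements['photo_search_text'].value = ''; — bracket notation sidesteps the hyphen problem.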

    Read the article

  • Dynamic where clause using Linq to SQL in a join query in a MVC application

    - by jhoefnagels
    Dear .Net Linq experts,

    I am looking for a way to query for products in a catalog using filters on properties which have been assigned to the product based on the category to which the product belongs. So I have the following entities involved:

        Products          [Id, CategoryId]
        Categories        [Id]
        Properties        [Id, CategoryId]
        PropertyValues    [Id, PropertyId]
        ProductProperties [ProductId, PropertyValueId]

    When I add a product to the catalog, multiple ProductProperties will be added based on the category, and I would like to be able to filter all products from a category by selecting values for one or more properties. I will gather all filters, which I will hold in a list, by reading the URL. Now it is time to actually get the products based on multiple properties, and I have been trying to find the right strategy, but until now it does not really work. Is there a way to make this work without writing SQL? I was trying something like this:

        productsInCategory = ProductRepository.Where(p => p.Category.Name == category);

        foreach (PropertyFilter pf in filterList)
        {
            productsInCategory = (from product in productsInCategory
                                  join pp in ProductPropertyRepository on product.Id equals pp.ProductId
                                  where pp.PropertyValueId == pf.ValueId
                                  select product);
        }
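
    A hedged aside on why a loop like this tends to misbehave: the query is not executed inside the loop (LINQ is deferred), and every lambda captures the same loop variable pf, so by the time the query finally runs, all the filters see the last filter in the list. A sketch that copies the variable first and phrases each filter as a subquery (ProductProperties here is an assumed association property on Product):

        foreach (PropertyFilter pf in filterList)
        {
            PropertyFilter filter = pf;  // capture a copy, not the loop variable
            productsInCategory = productsInCategory.Where(product =>
                product.ProductProperties.Any(pp => pp.PropertyValueId == filter.ValueId));
        }

    Each pass adds one more AND-ed EXISTS clause to the generated SQL, so a product must match every selected property value.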

    Read the article

  • Django Class Views and Reverse Urls

    - by kalhartt
    I have a good many class-based views that use reverse(name, args) to find URLs and pass them to templates. The problem, however, is that class-based views must be instantiated before urlpatterns can be defined. This means the class is instantiated while urlpatterns is empty, leading to reverse throwing errors. I've been working around this by passing lambda: reverse(name, args) to my templates, but surely there is a better solution. As a simple example, the following fails with the exception:

        ImproperlyConfigured at xxxx
        The included urlconf mysite.urls doesn't have any patterns in it

    mysite/urls.py:

        from mysite.views import MyClassView

        urlpatterns = patterns('',
            url(r'^$', MyClassView.as_view(), name='home'),
        )

    views.py:

        class MyClassView(View):
            def get(self, request):
                home_url = reverse('home')
                return render_to_response('home.html', {'home_url': home_url},
                                          context_instance=RequestContext(request))

    home.html:

        <p><a href="{{ home_url }}">Home</a></p>

    I'm currently working around the problem by forcing reverse to run at template-rendering time, changing views.py to:

        class MyClassView(View):
            def get(self, request):
                home_url = lambda: reverse('home')
                return render_to_response('home.html', {'home_url': home_url},
                                          context_instance=RequestContext(request))

    and it works, but this is really ugly and surely there is a better way. So: is there a way to use reverse in class-based views while avoiding the cyclic dependency of urlpatterns requiring the view, requiring reverse, requiring urlpatterns...?
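
    For Django 1.4 and newer, there is a built-in version of exactly this trick: reverse_lazy, which returns a lazy object that is only resolved when first used, i.e. long after the URLconf has loaded. A minimal sketch:

        from django.core.urlresolvers import reverse_lazy  # moved to django.urls in later versions

        class MyClassView(View):
            home_url = reverse_lazy('home')  # safe even at class-definition time

            def get(self, request):
                return render_to_response('home.html', {'home_url': self.home_url},
                                          context_instance=RequestContext(request))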

    Read the article

  • Low Latency Serial Communications In .Net

    - by bvillersjr
    I have been researching various third-party libraries and approaches to low-latency serial communications in .Net. I've read enough that I have now come full circle and know as little as I did when I started, due to the variety of conflicting opinions. For example, the functionality in the Framework was ruled out by some convincing articles stating that "the Microsoft provided solution has not been stable across framework versions and is lacking in functionality." I have found articles bashing many of the older COM-based libraries. I have found articles bashing the idea of a low-latency .Net app as a whole, due to garbage collection. I have also read articles demonstrating how P/Invoking Windows API functionality for the purpose of low-latency communication is unacceptable. THIS RULES OUT JUST ABOUT ANY APPROACH I CAN THINK OF!

    I would really appreciate some words from those with been-there / done-that experience. Ideally, I could locate a solid library / partner and not have to build the communications library myself. I have the following simple objectives:

    - sustained low-latency serial communication in C# / VB.Net, 32/64 bit
    - well documented (if the solution is 3rd party)
    - relatively unimpacted (communication- and latency-wise) by garbage collection
    - flexible (I have no idea what I will have to interface with in the future!)

    The only requirement I have for certain is that I need to be able to interface with many different industrial devices, such as RS485-based linear actuators, serial / microcontroller-based gauges, and Modbus (also RS485) devices. Any comments, ideas, thoughts or links to articles that may iron out my confusion are much appreciated!

    Read the article

  • Antivirus and Content Management / Web Filtering

    - by Moif Murphy
    Hi, we currently use Net Intelligence for our web filtering and antivirus. We're looking for an alternative, and I was wondering what people here use? We have many mobile users who work away from the office, so we prefer something which includes a remote agent and a central management portal that we can customise and add our own rules to. Bundled antivirus is also preferable. Any recommendations?

    Read the article

  • What font does Google Chrome address bar use?

    - by eSKay
    If it matters, here are the specifications of what I am using:

        Google Chrome  5.0.322.2 (Official Build 38810) unknown
        WebKit         533.1
        V8             2.1.0.1
        User Agent     Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.1 (KHTML, like Gecko) Chrome/5.0.322.2 Safari/533.1

    thanks!

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: please do not mod down or close. I'm not a stupid PC user asking to fix my PC problem. I am intrigued and am having a deep technical look at what's going on.

    I have come across a Windows XP machine that is sending unwanted p2p traffic. I have done a 'netstat -b' command, and explorer.exe is sending out the traffic. When I kill this process the traffic stops (and obviously Windows Explorer dies). Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

        GNUTELLA CONNECT/0.6
        Listen-IP: x.x.x.x:8059
        Remote-IP: 76.164.224.103
        User-Agent: LimeWire/5.3.6
        X-Requeries: false
        X-Ultrapeer: True
        X-Degree: 32
        X-Query-Routing: 0.1
        X-Ultrapeer-Query-Routing: 0.1
        X-Max-TTL: 3
        X-Dynamic-Querying: 0.1
        X-Locale-Pref: en
        GGEP: 0.5
        Bye-Packet: 0.1

        GNUTELLA/0.6 200 OK
        Pong-Caching: 0.1
        X-Ultrapeer-Needed: false
        Accept-Encoding: deflate
        X-Requeries: false
        X-Locale-Pref: en
        X-Guess: 0.1
        X-Max-TTL: 3
        Vendor-Message: 0.2
        X-Ultrapeer-Query-Routing: 0.1
        X-Query-Routing: 0.1
        Listen-IP: 76.164.224.103:15649
        X-Ext-Probes: 0.1
        Remote-IP: x.x.x.x
        GGEP: 0.5
        X-Dynamic-Querying: 0.1
        X-Degree: 32
        User-Agent: LimeWire/4.18.7
        X-Ultrapeer: True
        X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

    So it seems that the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in Windows Firewall and it shouldn't be letting this traffic through. I have had a look at the messages explorer.exe is sending in Spy++, and the only related ones I can see are socket connections etc...

    My question is: what can I do to look into this deeper? What does the malware achieve by sending p2p traffic? I know the easiest way to fix the problem is to reinstall Windows, but I want to get to the bottom of it first, just out of interest.

    Edit: Had a look at Dependency Walker and Process Explorer. Both great tools. Here is an image of the TCP connections for explorer.exe in Process Explorer: http://img210.imageshack.us/img210/3563/61930284.gif

    Read the article

  • How to document mail setup after hand-over.

    - by BradyKelly
    I've just moved a client's email services over from my host to Google Apps. I would like to hand over a document providing everything they (or their agent) would need should I not be available, etc. How are such documents normally structured, and what level of detail should they contain? I know user names and passwords are essential, and that full instructions on how to manage domains on Google Apps would be over the top, but what is a commonly used middle ground?

    Read the article

  • Security considerations on Importing Bulk Data by Using BULK INSERT or OPENROWSET(BULK...)

    - by Ice
    I do not fully understand the following article: http://msdn.microsoft.com/en-us/library/ms175915(SQL.90).aspx

        "In contrast, if a SQL Server user logs on by using Windows
        Authentication, the user can read only those files that can be
        accessed by the user account, regardless of the security profile
        of the SQL Server process."

    What if I define a SQL Agent job to perform this bulk insert — is it the OWNER of the job who provides the security context?
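
    For concreteness, a minimal sketch of the statement being discussed (the table name and UNC path are made up):

        BULK INSERT dbo.ImportTarget
        FROM '\\fileserver\exports\data.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    A reasonable, hedged reading of the Agent question: what matters is the Windows identity the job step actually runs under — the SQL Server Agent service account for steps owned by a sysadmin, or the configured proxy / impersonated owner otherwise — since that is the account checked against the file's ACL.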

    Read the article

  • What is the PIXELFORMATDESCRIPTOR parameter in SetPixelFormat() used for?

    - by Mads Elvheim
    Usually when setting up OpenGL contexts, I've simply filled out a PIXELFORMATDESCRIPTOR structure with the necessary information and called ChoosePixelFormat(), followed by a call to SetPixelFormat() with the matching pixelformat returned by ChoosePixelFormat(). Then I've simply passed in the initial descriptor without giving much thought to why.

    But now I use wglChoosePixelFormatARB() instead of ChoosePixelFormat(), because I need some extended traits like sRGB and multisampling. It takes an attribute list of integers, just like Xlib/GLX on Linux — not a PIXELFORMATDESCRIPTOR structure. So, do I really have to fill in a descriptor for SetPixelFormat() to use? What does SetPixelFormat() use the descriptor for when it already has the pixelformat index? Why do I have to specify the same pixelformat attributes in two different places? And which one takes precedence: the attribute list passed to wglChoosePixelFormatARB(), or the PIXELFORMATDESCRIPTOR attributes passed to SetPixelFormat()?

    Here are the function prototypes, to make the question more clear:

        /* Finds a best match based on a PIXELFORMATDESCRIPTOR, and returns the
           pixelformat index */
        int ChoosePixelFormat(HDC hdc, const PIXELFORMATDESCRIPTOR *ppfd);

        /* Finds a best match based on an attribute list of integers and floats,
           and returns a list of indices of matches, with the best matches at the
           head. Also supports extended pixelformat traits like sRGB color space,
           floating-point framebuffers and multisampling. */
        BOOL wglChoosePixelFormatARB(HDC hdc, const int *piAttribIList,
                                     const FLOAT *pfAttribFList, UINT nMaxFormats,
                                     int *piFormats, UINT *nNumFormats);

        /* Sets the pixelformat based on the pixelformat index */
        BOOL SetPixelFormat(HDC hdc, int iPixelFormat,
                            const PIXELFORMATDESCRIPTOR *ppfd);
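
    A hedged sketch of the commonly documented pattern (not taken from the question itself): let DescribePixelFormat() fill in the descriptor for the index that wglChoosePixelFormatARB() picked, so nothing has to be specified twice:

        int pf = 0;
        UINT count = 0;
        /* attribs: the integer attribute list already built for wglChoosePixelFormatARB */
        if (wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &pf, &count) && count > 0)
        {
            PIXELFORMATDESCRIPTOR pfd;
            /* ask GDI to describe the chosen format instead of re-stating it */
            DescribePixelFormat(hdc, pf, sizeof(pfd), &pfd);
            SetPixelFormat(hdc, pf, &pfd);
        }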

    Read the article

  • rails + sheevaplug = rails home development server and more

    - by microspino
    Hello, I'd like to build a "Rails brick" using a SheevaPlug from Marvell (the OS is Ubuntu out of the box, but you can install other distributions on it). It would be a home server and a silent, low-cost ($99), low-energy development machine. I'd like to add Rails, RVM, lots of gems, git-based Heroku-like deployment, and Passenger + nginx. This way I could have a portable server with a complete development environment; maybe I could find a hosting company where I can co-locate a grid of these devices, or I could sell it as a simple little server for offices of 10 or fewer users, with some centralized Rails services (I'm thinking of a CMS, a blog, a wiki, a calendar, or whatever this little jewel could afford). The USB port could also make it a print server, or a UMTS link to the web via HUAWEI-style USB UMTS keys.

    Can you give me some hints about the following?

    - Is this project a crazy, close-to-failure idea? Why?
    - Which gems would you include?
    - Which open-source Rails apps would you suggest?

    I already have an Excito Bubba server at home; I saw the TonidoPlug, so it came to mind to build something similar but Rails-based (Bubba is PHP-based; TonidoPlug I don't know, but it does not seem to be a Rails thing).

    Read the article

  • Unable to download .apk via webbrowser from drupal site

    - by ggrell
    I have a Drupal-based website where people can log in and see private discussion forums. This is where I want my beta testers to download the beta .apk files for my Android application. I tested this thoroughly on my Android 1.6 based myTouch 3G, and was able to log in and download files attached to forum posts without problems.

    Now comes the interesting part: my testers on Droids and Nexus Ones (Android 2.0.1 and 2.1) were complaining that their downloads were failing. Since I don't have a 2.0 phone, I tried it out in a 2.0 emulator, and lo and behold, it didn't work. The download shows the indeterminate progress for a second or two, then shows "Download unsuccessful". Based on what I see in the logs, it is apparent that the server is returning a 404 for the download request from 2.0 browsers. I can download to my desktop and 1.6 phone with no problem. The only reason I can think of that the server would return a 404 for a request is that for some reason the credentials or cookies aren't being passed by the download process. Logcat shows:

        http error 404 for download x

    Some background: I added the MIME type to my .htaccess like this:

        AddType application/vnd.android.package-archive apk

    I checked the server logs and see the following for failed downloads:

        xx.xx.xx.224 - - [28/Jan/2010:20:39:00 -0500] "GET /system/files/grandmajong-beta090.apk HTTP/1.1" 404 - "http://trickybits.com/forums/beta-testing/grandma-jong/latest-version-090-b1" "Mozilla/5.0 (Linux; U; Android 1.6; en-us; sdk Build/Donut) AppleWebKit/528.5+ (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1"

    Read the article

  • Detecting Xml namespace fast

    - by Anna Tjsoken
    Hello there, this may be a very trivial problem I'm trying to solve, but I'm sure there's a better way of doing it, so please go easy on me.

    I have a bunch of XSD files that are internal to our application, and we have about 20-30 Xml files that implement datasets based on those XSDs. Some Xml files are small (<100Kb); others are about 3-4Mb, with a few being over 10Mb. I need to find a way of working out which namespace these Xml files use, in order to provide (something like) IntelliSense based on the XSD. The implementation of this is not an issue — another developer has written the code for it. But I'm not sure what the best (and fastest!) way of detecting the namespace is, without the use of XmlDocument (which does a full parse). I'm using C# 3.5, and the documents come through as a Stream (some are remote files). All the files are *.xml (I could detect the type if it were extension-based), but unfortunately the Xml namespace is the only way.

    Right now I've tried XmlDocument, but I've found it to be inefficient and slow, as even the 100Kb documents have to wait to be fully parsed. The method signature is something like the following (overloads include a string for "content"):

        public string GetNamespaceForDocument(Stream document);

    Would a (compiled) RegEx pattern be good? How does Visual Studio manage this so efficiently? Another colleague has told me to find a fast Xml parser in C/C++, parse the content, and have a stub that gives back the namespace, as it's slower in .NET — is this a good idea?
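
    One hedged sketch that avoids a full parse: XmlReader is a forward-only, streaming reader, so pulling the namespace off the document element touches only the first few hundred bytes of even a 10Mb file:

        public string GetNamespaceForDocument(Stream document)
        {
            using (XmlReader reader = XmlReader.Create(document))
            {
                // skips the XML declaration, comments and processing
                // instructions, stopping at the document element
                reader.MoveToContent();
                return reader.NamespaceURI;
            }
        }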

    Read the article

  • apache - virtual host logging

    - by imaginative
    I have a virtual host set up with UseCanonicalName Off. I have ServerName domain.com set, and ServerAlias *.domain.com, in the virtual host. Using apache2's %v LogFormat string will only ever capture domain.com, and I'm trying to get it to capture foo.domain.com so I can split logs accordingly. The LogFormat I'm currently using is:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
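
    A hedged pointer: %v always logs the canonical ServerName, while %V honours the UseCanonicalName setting — so with UseCanonicalName Off it logs the hostname the client actually sent. Swapping the first format token may be all that's needed:

        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined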

    Read the article

  • SharePoint 2007 and SiteMinder

    - by pborovik
    Here is a question regarding some details of how SiteMinder secures access to SharePoint 2007. I've read a bunch of material on this and have some picture of SharePoint 2010 FBA claims-based + SiteMinder security (I can be wrong here, of course):

    - SiteMinder is registered as a trusted identity provider for SharePoint. It means (to my mind) that SharePoint has no need to go into all those user directories like AD, an RDBMS or whatever to create a record for a user being granted access to SharePoint — instead it consumes a claims-based id supplied by SiteMinder.
    - SiteMinder checks all requests to SharePoint resources and starts a login sequence via SiteMinder if it does not find the required headers in the request (SMSESSION, etc.).
    - SiteMinder creates a GenericIdentity with the user login name if the headers are OK, so SharePoint recognizes the user as authenticated.

    But in the case of SharePoint 2007 with FBA + SiteMinder, I cannot find answers to questions like:

    - Does SharePoint need to go to all those user directories like AD to know something about users (as SiteMinder is not in charge of providing user info like claims-based ids)? So the SharePoint admin should configure SharePoint FBA to talk to these sources?
    - Let's say I'm talking to a Web Service of a SharePoint protected by SiteMinder. Shall I make an Authentication.asmx Login call to create an authentication ticket, or is this schema somehow changed by SiteMinder? If such a call is needed, do I also need a SiteMinder authentication sequence?
    - What prevents me from rewriting request headers (say, manually in Fiddler) before posting a request to the SharePoint protected by SiteMinder, to override its defence?

    Pity, but I do not have access to a deployed SiteMinder + SharePoint, so I need to investigate some questions blindly. Thanks.

    Read the article

  • Workflow engine (BPMN, Drools, etc.) or ESB?

    - by Tom
    We currently have an application that is based on an in-house developed workflow engine with a YAML-based DSL. We are looking to move parts of it to Java. I have discovered a number of Java solutions like Intalio, jBPM, Drools Expert, Drools Flow etc. They appear to be aimed at businesses where the business analyst creates the workflows using a graphical editor and submits them to the workflow engine. They seem geared towards ease of use for non-technical people rather than for developers, with a focus on human interaction. Our workflows tend to look like:

        Discover-a-file --------\
                                 -> join -> process-file -> move-file -> register-file
        Discover-some-metadata -/

    If any step fails we need to retry it X times. We also need to be able to stop the system, restart it, and have it continue from where it was (durable). Some of our workflows can be defined by a set of goals we need to achieve, so Jess's backwards rule chaining sounds interesting, but it is not open source. It might be that what we are after is a finite state machine engine, or just an Enterprise Service Bus with everything done as JMS queues.

    Is there a good open-source workflow engine that is both standards-based and geared towards developers? We don't particularly want to use a graphical workflow designer or write reams of XML, and it should ideally be in Java or language-agnostic (making REST/SOAP calls to external services).

    Thanks, Tom

    Read the article

  • Implementing the ‘defer’ statement from Go in Objective-C?

    - by zoul
    Hello! Today I read about the defer statement in the Go language:

        A defer statement pushes a function call onto a list. The list of saved
        calls is executed after the surrounding function returns. Defer is
        commonly used to simplify functions that perform various clean-up actions.

    I thought it would be fun to implement something like this in Objective-C. Do you have some idea how to do it? I thought about dispatch finalizers, autoreleased objects and C++ destructors.

    Autoreleased objects:

        @interface Defer : NSObject {}
        + (id) withCode: (dispatch_block_t) block;
        @end

        @implementation Defer
        - (void) dealloc {
            block();
            [super dealloc];
        }
        @end

        #define defer(__x) [Defer withCode:^{__x}]

        - (void) function {
            defer(NSLog(@"Done"));
            …
        }

    Autoreleased objects seem like the only solution that would last at least to the end of the function, as the other solutions would trigger when the current scope ends. On the other hand, they could stay in memory much longer, which would be asking for trouble.

    Dispatch finalizers were my first thought, because blocks live on the stack and therefore I could easily make something execute when the stack unrolls. But after a peek in the documentation, it doesn't look like I can attach a simple "destructor" function to a block, can I?

    C++ destructors are about the same thing: I would create a stack-based object with a block to be executed when the destructor runs. This would have the ugly disadvantage of turning the plain .m files into Objective-C++.

    I don't really think about using this stuff in production; I'm just interested in various solutions. Can you come up with something working, without obvious disadvantages? Both scope-based and function-based solutions would be interesting.
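
    Filling in the sketch above so it actually compiles — a hedged completion under manual reference counting; the stored block ivar and the Block_copy call are additions, since the original dealloc references a block it never stores:

        @interface Defer : NSObject {
            dispatch_block_t _block;
        }
        + (id) withCode: (dispatch_block_t) block;
        @end

        @implementation Defer
        + (id) withCode: (dispatch_block_t) block {
            Defer *defer = [[[Defer alloc] init] autorelease];
            defer->_block = Block_copy(block);  // copy the stack block to the heap
            return defer;
        }
        - (void) dealloc {
            _block();                           // runs when the autorelease pool drains
            Block_release(_block);
            [super dealloc];
        }
        @end

    This makes the caveat from the question explicit: the block fires when the enclosing autorelease pool drains — typically at the end of the current event-loop turn — rather than precisely at function return.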

    Read the article

  • PHP: we have sessions and cookies... I love cookies, but they are blowing my mind right now

    - by Matt
    I am not sure how to go about accessing the variable I need to set on a cookie... I was thinking about using the $_POST global, but I don't know, based on my design, if it will work. I am using a master-page type design, separating index.php from my function includes and database information and from individual pages (which will be returned to an include in index.php based on a $_GET).

    Okay, so back to my question: what is the most efficient way to set a cookie in a design that has a main page that everything branches from, and how would I pull the value? Is $_POST a good enough way to go about it?

    Also... by saying it must be the first thing sent — does that mean I cannot run any server-side scripts before that? I could definitely utilize a login query, I think, but I don't want to write code just to be disappointed, based on my lack of time and knowledge.

    I did search for answers... I know this most likely feels like a generic question that could be answered in a different place, but I know I will get an accurate and professional answer here, so I don't want to bet on the half-answers I found otherwise. Of course I will sanitize everything and not store any sensitive information (passwords, address, phone, or anything really for that matter besides some kind of session ID and the username).

    If this is confusing I am sorry, but I am on a gov computer... and they lock these tighter than Ft. Knox... so getting my code on here will be a chore until I get back to my room.

    Thanks, Matt
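
    On the "first thing sent" worry, a hedged sketch (the field and cookie names are made up): cookies ride in the HTTP response headers, so setcookie() only has to run before any output is echoed — any amount of server-side logic, including a login query, can happen first.

        <?php
        // runs at the top of index.php, before any HTML is printed
        if (isset($_POST['username'])) {
            // name, value, expiry (30 days), path
            setcookie('site_user', $_POST['username'], time() + 30 * 24 * 3600, '/');
        }
        // ... the rest of the page can now be rendered as usual
        ?>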

    Read the article

  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm. I'd like to implement this analysis mainly based on the algorithm by Whaley and Lam. Whaley and Lam use a BDD-based implementation of Datalog to represent and compute the points-to analysis relations. The following lists some of the relations that are used in a typical points-to analysis. Note that D(w, z) :- A(w, x), B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. BDDs are the data structure used to represent these relations.

    Relations:

        input  vP0    (variable : V, heap : H)
        input  store  (base : V, field : F, source : V)
        input  load   (base : V, field : F, dest : V)
        input  assign (dest : V, source : V)
        output vP     (variable : V, heap : H)
        output hP     (base : H, field : F, target : H)

    Rules:

        vP(v, h)      :- vP0(v, h)
        vP(v1, h)     :- assign(v1, v2), vP(v2, h)
        hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2)
        vP(v2, h2)    :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2)

    I need to understand if Maude is a good environment for implementing a points-to analysis. I noticed that Maude uses a BDD library called BuDDy, but it looks like Maude uses BDDs for a different purpose, i.e. unification. So I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently, and this concurrency could potentially make my points-to analysis faster than sequential processing of rules. But I don't know the best way to represent my relations in Maude. Should I implement BDDs in Maude myself, or does Maude's internal BDD-based unification have the same effect?

    Read the article

  • Portlet container like pluto or jetspeed on google app engine?

    - by Patrick Cornelissen
    I am trying to build something "portlet server"-ish on the Google App Engine (as open source). I'd like to use the JSR 168/286 standards, but I think the restrictions of App Engine will make it somewhere between tricky and impossible. Has anyone tried to run Jetspeed, or an application that uses Pluto internally, on the Google App Engine?

    Based on my current knowledge of portlets and the App Engine, I'm anticipating these problems:

    - A war file with portlets is, from the deployment standpoint, more or less a complete webapp (yes, I know that it doesn't really work without a portal server). The war file may contain its own web.xml etc. This makes deployment on App Engine rather difficult, because apps are not visible to each other, so all portlet-containing archives would need to be included in the war file of the deployed "App Engine based portal server".
    - The "portlets" are (at least in Liferay) started as permanent servlet processes, based on their portlet.xmls and web.xmls, which are located in the same spot for every portlet archive that is loaded. I think this may be problematic in App Engine, because everything is in one big "web app", so it may be tricky to access the portlet.xmls from each archive. This prevents 100% compatibility, in my opinion.

    Is there anyone here who has any experience with the combination of portlets and the App Engine? Do you think it's feasible to modify Jetspeed, Pluto, or any other portlet container to be able to run it on the App Engine?

    Read the article
