Search Results

Search found 28303 results on 1133 pages for 'multi site'.

Page 54 of 1133

  • DNS Does Not Register at Off-site Locations

    - by Russ Warren
    First of all, let me give you the specifics of our setup:

    - Windows Small Business Server 2008 domain, with all applicable updates, on the DC
    - The DC does DHCP for the main site
    - The DC does DNS for all sites
    - 3 sites, including our headquarters where the DC is located
    - All sites are connected through OpenVPN SSL tunnels terminated by an Untangle box at each site
    - The 2 remote sites use the Untangle box as a DHCP server for their subnet, which assigns the DC as the primary DNS server
    - A collection of Windows XP and Windows 7 workstations connected to the domain

    Here's the issue: all of the workstations at the main site register with the DNS server on the domain controller fine. As they grab an IP from the DHCP server, it updates the DNS server with the new host record. I have 2 systems (each at a different remote site) that fail to register with the DNS server. I've attempted the following troubleshooting steps:

    - Confirmed the network adapter is using the DC as a DNS server
    - Confirmed 2-way traffic is possible between DC and workstation
    - Verified the "Register with DNS server" setting was checked in the adapter properties
    - Attempted ipconfig /registerdns and received no errors

    For the time being, I have set up a DHCP reservation for these systems and manually created a host record. This seems to work fine, but I need a solution for any new systems that go out there.
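
    For any new machine that lands at a remote site, a quick first pass from the workstation itself can narrow down where registration dies. A hedged sketch (standard Windows commands; the hostname, zone and DC address below are placeholders):

        :: confirm the DNS server and connection-specific suffix the client actually sees
        ipconfig /all
        :: re-attempt dynamic registration, then check the DNS server's event log on the DC
        ipconfig /registerdns
        :: ask the DC directly whether the record ever arrived
        nslookup ws01.example.local 10.0.0.1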

    Read the article

  • MySQL encoding problem after site move

    - by Quan Zhou
    Guys, I need your help. Since my friend lost his database on Dreamhost last month, he decided to move his WordPress-based blog (written in Chinese) to my server. He uses a WP plugin called wp-db-backup to perform regular DB backups. The servers' backgrounds are:

    Dreamhost:
        Linux 2.6.31.5-modsign-aufs2-grsec-2-opt
        mysql Ver 14.12 Distrib 5.0.16, for pc-linux-gnu (i386) using readline 5.0
        apache2, unknown version

    My server:
        Linux li159-46 2.6.32.12-x86_64-linode12
        mysql Ver 14.14 Distrib 5.1.45, for debian-linux-gnu (x86_64) using readline 6.1
        nginx 0.8.36

    His site's encoding was UTF-8 in both wp-config and the DB. I imported his DB backup file as UTF-8 by default, then I sync'd the files from Dreamhost using rsync, then I changed the DB address and nothing more. But when I took a first look at the "new" site, it was full of unreadable characters. I've met this problem before; I changed the charset options in the browser, but none of them made it display properly. Then I converted his DB to GB18030, which works, but only if the browser charset is set to GB18030 or GBK, while by default browsers recognize the charset as UTF-8. I tried to edit the headers but it doesn't work. What could I do now? Thx~~
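
    This kind of mojibake is classically a connection-charset mismatch at import time: the dump is in one encoding, but the client session importing it declares another, so the bytes are transcoded on the way in. A hedged first pass (the database name is a placeholder, and this assumes the dump itself is valid UTF-8):

        # see what the server and the default connection actually use
        mysql -e "SHOW VARIABLES LIKE 'character_set%';"
        # re-import with the connection charset pinned to the dump's encoding
        mysql --default-character-set=utf8 blogdb < backup.sql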

    Read the article

  • Referencing groups/classes from Puppet dashboard in my site manifest

    - by Banjer
    I'm using Puppet Dashboard as my ENC and I'm not sure how to reference or use class and group classifications from /etc/puppet/manifests/site.pp. I have two groups defined in the dashboard: CentOS6 and SLES11. What should my site.pp look like if I want to include a certain list of modules in the CentOS6 group and a certain list of modules in the SLES11 group? I'm trying to do something like this:

        # /etc/puppet/manifests/site.pp
        node basenode {
            include hosts
            include ssh::server
            include ssh::client
            include authentication
            include sudo
            include syslog
            include mail
        }

        node 'CentOS6' inherits basenode {
            include profile
        }

        node 'SLES11' inherits basenode {
            include usrmounts
        }

    I have OS-specific case statements within my modules, but there are some modules that will only be applied to a certain distro. So I suppose I have two questions:

    1. Is this the best way to apply modules/resources in an OS-specific manner? Or does the above make you want to vomit?
    2. Regardless of #1, I'm still curious as to how to reference classes, groups, and nodes from Dashboard within my manifests. I've read the External Nodes doc, but I'm not seeing how they correspond to manifests.

    Thanks all.
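
    On the second question: when a dashboard acts as the ENC, its groups never show up as node names inside site.pp. Instead, the master queries the ENC for each node and gets back a YAML classification, so classes assigned to the CentOS6 group simply arrive in that YAML alongside whatever site.pp declares. A sketch of what the ENC reply for one node might look like (contents illustrative):

        classes:
          - hosts
          - ssh::server
          - profile          # picked up via the node's CentOS6 group in the dashboard
        parameters:
          ntp_server: ntp.example.com
        environment: production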

    Read the article

  • IIS 8 URL Redirect on site level

    - by jackncoke
    I am trying to do a simple 301 permanent redirect to another URL in IIS 8. The end result would be that if I navigated to domain2.com, I would end up on domain1.com. We are moving from IIS 6 to a new server and have approx 600+ sites that will be configured on this IIS 8 box. All of these sites run a proprietary CMS and are looking at the same directory for source code. In IIS 6 I would just go to the Home Directory tab of each site, check the box that says "Permanent Redirect", and provide a URL. With IIS 8 there is "HTTP Redirect", and this looks like it would do the trick, but it is being applied to all the sites in IIS 8, not at the site level like it used to be in IIS 6. I also looked into the URL Rewrite module for IIS 8, but it seems to take rules in the style of a firewall and I am not sure I could effectively create rules that would cater to 600+ sites. I am looking for the easiest way to have redirects at the site level so that customers with multiple domains can have their sites redirect to their main domain for SEO purposes. I feel like this was so easily achieved in IIS 6 that I must be overlooking something in the new version.
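
    One likely reason HTTP Redirect appears to hit every site at once here: the setting is normally saved into web.config in the site's physical root, and all 600 sites share that root. Committing the same setting to applicationHost.config instead scopes it to a single site and is scriptable; a hedged sketch (site name and destination are placeholders):

        %windir%\system32\inetsrv\appcmd.exe set config "domain2.com" ^
            -section:system.webServer/httpRedirect /enabled:"True" ^
            /destination:"http://domain1.com" /httpResponseStatus:"Permanent" ^
            /commit:apphost

    With /commit:apphost the redirect lands in a <location path="domain2.com"> block in applicationHost.config rather than in the shared directory's web.config, so the other sites never see it.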

    Read the article

  • Plone site randomly serving wrong content

    - by Chris Miller
    I have a Plone site that has begun to randomly serve up the wrong content: any given piece of content suddenly shows something else. Sometimes a JPEG loads as a stylesheet, or a stylesheet loads as a page, or a page as an image. The images move around; sometimes our site logo shows a bullet, or one of the other site images. Fiddler shows the wrong content in the response, and the Apache logs show the content type of the incorrect file (so if an image loads in place of a stylesheet, Apache shows that). We thought mod_proxy was the source of our grief, but we get the problem hitting Zope directly. I never get the wrong content when using the Medusa Monitor to repeatedly hit the content. I do see ConflictErrors in the instance.log file, and they seem to be correlated to the problem, but not 100%:

        ZPublisher.Conflict ConflictError at \path\to\object: database conflict error
        (oid 0x3586, class BTrees._OIBTree.OIBTree, serial this txn started with blah,
        serial currently committed blah) (X conflicts (0 unresolved) since startup blah)

    I pulled that off the web, it's not from our logs, but it's the same message. This may be a red herring; it sounds like those messages are normal. We've updated to 3.3.5, same problems. I'm at a loss. I'm wondering: is there a good way to intercept what is being served? Secondly, is there a way to increase the verbosity of the access log to include the content type? I've even seen the problem manifest in ZMI. It happens more often when we're authenticated. Sometimes it can take a thousand reloads to see the problem; other times it happens in different ways every time we reload. I believe we've seen this problem for a couple of years, but it was very intermittent: a page would show the content of a GIF, then it wouldn't happen again for a long time. Now it's a huge problem.
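
    On the access-log question: Apache can log any response header, so the served Content-Type can go straight into the log for correlation. A minimal example (the format name is arbitrary):

        # httpd.conf: combined-style log plus the response Content-Type
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Content-Type}o\"" withtype
        CustomLog logs/access_log withtype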

    Read the article

  • Multi-WAN bonding across different media

    - by Tom O'Connor
    I've recently been thinking again about a product that Viprinet provide. Basically they've got a pair of routers: one that lives in a datacentre (their VPN Multichannel Hub) and the on-site hardware (their VPN Multichannel routers). They've also got a bunch of interface cards (like HWICs) for 3G, UMTS, Ethernet, ADSL and ISDN adapters. Their main spiel seems to be bonding across different media. It's something that I'd really like to use for a couple of projects, but their pricing is really quite extreme: the hub is about 1-2k, the routers are 2-6k, and the interface modules are 200-600 each. So, what I'd like to know is: is it possible, with a couple of stock Cisco routers, 28xx or 18xx series, to do something similar, and basically connect a bunch of different WAN ports, but have it all presented neatly as one channel back to the internet, with seamless (or nearly seamless) failover if one of the WAN interfaces should fail? Basically, if I got 3x 3G-to-Ethernet modems, each on a different network, I'd like to be able to load-balance/bond across all of them, without having to pay Viprinet for the privilege. Does anyone know how I'd go about configuring something like that myself, based around standard protocols (or vendor-specific ones), but without actually having to buy the Viprinet hardware?
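
    Not true bonding, but the closest stock building block on a Linux gateway is an equal-cost multipath default route, which load-shares flows across several uplinks. A hedged sketch (addresses and interface names are placeholders, and failover is only as seamless as whatever monitors the nexthops):

        # iproute2: spread outbound flows across three uplinks
        ip route replace default scope global \
            nexthop via 192.0.2.1    dev wan0 weight 1 \
            nexthop via 198.51.100.1 dev wan1 weight 1 \
            nexthop via 203.0.113.1  dev wan2 weight 1

    The gap versus Viprinet is that any single flow still rides one link; their boxes tunnel everything to the datacentre hub and can therefore split one flow across all media.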

    Read the article

  • Configure IIS site to work with host header & hosts file entry

    - by HarveySaayman
    I'm a bit of an IIS / web noob (I'm a C# backend service / WinForms dev), so please bear with me :-) I've set up a site in IIS on my local dev machine. In the bindings section of the site I've added 4 bindings, all 4 for HTTP:

        Host Name                    Port    IP Address
        blog.sourcecube.co.za        26581   *
        www.blog.sourcecube.co.za    26581   *
        blog.sourcecube.co.za        26581   127.0.0.1
        www.blog.sourcecube.co.za    26581   127.0.0.1

    In my hosts file (drivers\etc\hosts), I've added the following entries:

        127.0.0.1 blog.sourcecube.co.za
        127.0.0.1 www.blog.sourcecube.co.za

    When I ping my domain name from the command line, it does in fact resolve to the loopback address, 127.0.0.1. So what I'm expecting to happen when I navigate to blog.sourcecube.co.za in my browser is for it to resolve to 127.0.0.1, and when the request hits IIS, it should know which site to serve because of the host header? But when I navigate to blog.sourcecube.co.za, I get an "Unable to connect, Firefox can't establish a connection to the server at blog.sourcecube.co.za" error. What am I doing wrong?

    --- UPDATE ---

    Navigating to blog.sourcecube.co.za:26581 from my browser works... I'd like to get it working without specifying the port number though.

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregator site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It runs WordPress 3.4.2, and I have a cron job, staggered to run five times an hour on another server, that pulls in the stories and then publishes them to the front page of this site. This is so I don't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there?

    EDIT - Details of my server configuration: the server that handles all the publishing is an unmanaged instance on AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front-end app server also mounts the NFS and requests data from the RDS; it is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.
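
    The blunt instrument for keeping crawlers near the front page is robots.txt. A hedged sketch; the paths are guesses at typical WordPress archive URLs and would need matching to the site's permalink structure, and Googlebot in particular ignores Crawl-delay (its rate is set in Google Webmaster Tools instead):

        # robots.txt at the site root
        User-agent: *
        Disallow: /2012/        # date-based archive permalinks
        Disallow: /tag/
        Disallow: /author/
        Disallow: /page/        # paginated archives behind the front page
        Crawl-delay: 10         # honored by Bing/Yandex, ignored by Googlebot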

    Read the article

  • Getting 404 error on MVC web-site

    - by RB
    I have an IIS 7.5 web-site, on Windows Server 2008, with an ASP.NET MVC 2 web-site deployed to it. The website was built in Visual Studio 2008, targeting .NET 3.5, and IIS 5.1 has been successfully configured to run it as well, for local testing. We've installed the world's simplest MVC application (the one which is created when you create a new MVC 2 project in Visual Studio), and we are getting 404s on any page we try to access, e.g. <my_server>/Home/About will generate a 404. I've asked this question on Stack Overflow as well, but that was before I knew it was a server issue. I have checked the following things:

    - There are 404 entries in the IIS log, corresponding to each request.
    - The application pool for the web-site is set to use the Integrated pipeline.
    - The "customErrors" mode is set to off.
    - .NET 3.5 SP1 is installed.
    - ASP.NET MVC 2 is installed.
    - I've used MVC Diagnostics to confirm all MVC DLLs are being found.
    - ASP.NET is enabled in IIS, which we've demonstrated by running the MVC Diagnostics page.
    - KB 2023146 did highlight that HTTP Redirection was off, so we've turned it on, but no joy.

    Any ideas will be greatly appreciated! Someone did suggest that there might be problems caused by Windows Server 2008 being 64-bit - does anyone know anything about this?
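
    When extensionless MVC routes 404 under IIS 7.x even with an Integrated-pipeline app pool, a commonly suggested check is whether requests ever reach the managed routing module. Forcing all managed modules to run for every request is the usual blunt test (a workaround with a performance cost, not a root-cause fix); this goes in the site's web.config:

        <configuration>
          <system.webServer>
            <!-- run managed modules (including routing) for extensionless URLs too -->
            <modules runAllManagedModulesForAllRequests="true" />
          </system.webServer>
        </configuration>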

    Read the article

  • Steps to debugging web site latency and timeout issues

    - by Paperjam
    I have a client who has multiple offices around the country, all of which share the same Internet connection via their WAN. One specific office for this client is experiencing severe latency and timeout issues with my web site. Most, but not all, of the latency occurs on a specific ASPX page where multiple postbacks are made while populating cascading dropdown lists (rapid form submits). The latency is sporadic and can be anywhere from a few seconds to a full timeout. There is no indication that the timeouts are occurring on the server's end.

    The IT guy for this client is having trouble narrowing down the problem. Since it is affecting only one location for one client, I am led to believe it is not something with my site but something specific to that location. He's measured ping times while using the site and has noticed no real variance, even when the page has timed out. I believe this may be caused by some sort of Internet filter that doesn't like rapid form submits, but beyond a hunch I haven't a clue.

    My question is: what things should I tell the IT guy to look for? While I'm not trying to provide active tech support for this issue, I would at least like to glean an understanding of what is going on and try to offer some sort of advice. Thank you.

    Read the article

  • Localhost stops resolving/serving local site after a few clicks in IIS 7.5

    - by Jo-Pierre
    Previously I searched and tried to find the answer in an earlier question, however I'm not sure it was resolved, and I couldn't comment on the question to find out more, so I decided to post a new question. Previous question: http://serverfault.com/questions/314333/localhost-stop-resolving-after-a-few-minutes-iis-7-5

    So I set up a new website on Windows 7 IIS 7.5. I give it a host header, and in the hosts file I add the entry for 127.0.0.1 and browse the site. After about the second or third time of trying to click around on the local site, it starts hanging; it looks like it's trying to load, but eventually comes back in Firefox with "The connection was reset" (takes about 30-50 secs before this happens). I then used a program like CurrPorts to view the ports that are listening, and for the initial request it all seems good. Now, after the site is hanging, I don't see the hit coming through anymore. It's as if the browser loses touch with IIS or something. If I open a different browser, it works fine for about 2 clicks or so, then same problem. Anyone know what could be causing this? Or how to resolve it?
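
    When it next hangs, it may help to look one layer below the browser: CurrPorts shows sockets, but IIS sits behind the kernel HTTP listener, and a stuck request shows up in its queues. Two stock Windows commands (no assumptions beyond a default IIS install; :80 is a placeholder for the bound port):

        :: dump the HTTP.SYS request queues; stuck requests appear here with their URLs
        netsh http show servicestate
        :: confirm something is still listening on the binding
        netstat -ano | findstr :80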

    Read the article

  • Searching SharePoint site with Windows Explorer

    - by alexsome
    Every week, I manually back up recent versions of the files on my group's SharePoint site. I open the library in Windows Explorer, search for all files modified in the past week, then copy and paste them to a network location. We need this process because our SharePoint site has a quota that we would easily hit if we kept unlimited versions, so we keep a history of older versions on the network.

    Recently I got an upgrade to my work computer and I am unable to search the site using Windows Explorer. When I run the search for files modified in the last week, no results are returned. If I run a search with no criteria on the file library, all the files are found, but the "modified on" field is blank. So the search results only have the file and type fields. The new computer has Windows XP, just like the old one did.

    I hope this makes sense. Does anyone have any clue what the problem could be? I'd be happy to provide more info if necessary. It's bugging me to no end and I'm not even sure where to begin looking; it's either a trivial issue or a very obscure one. Thanks a lot.

    Read the article

  • SharePoint 2007 Object Model: How can I make a new site collection, move the original main site to b

    - by program247365
    Here's my current setup:

    - one site collection on a SharePoint 2007 (MOSS Enterprise) box (32 GB total in size)
    - one main site with many subsites (mostly created from the team site template, if that matters) that is part of the one site collection on the box

    What I'm trying to do* (*if there is a better order or method for the following, I'm open to changing it):

    1. Create a new site collection, with a main default site, on the same SP instance (this is done, easy to do in the SP Object Model)
    2. Move rootweb (a) to be a subsite in the new location, under the main site

    Current structure:

        rootweb (a)
        \ many sub sites (sub a)

    What the new structure should look like:

        newrootweb (b)
        \ oldrootweb (a)
          \ old many sub sites (sub a)

    Here's my code for step #2. Notes:

    - SPImport, in the object model under SharePoint.Administration, is what is being used here
    - This code currently errors out with "Object reference not an instance of an object" when it fires the error event handler

        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Deployment;

        public static bool FullImport(string baseFilename, bool CommandLineVerbose, bool bfileCompression,
            string fileLocation, bool HaltOnNonfatalError, bool HaltOnWarning, bool IgnoreWebParts,
            string LogFilePath, string destinationUrl)
        {
            #region my try at import
            string message = string.Empty;
            bool bSuccess = false;
            try
            {
                SPImportSettings settings = new SPImportSettings();
                settings.BaseFileName = baseFilename;
                settings.CommandLineVerbose = CommandLineVerbose;
                settings.FileCompression = bfileCompression;
                settings.FileLocation = fileLocation;
                settings.HaltOnNonfatalError = HaltOnNonfatalError;
                settings.HaltOnWarning = HaltOnWarning;
                settings.IgnoreWebParts = IgnoreWebParts;
                settings.IncludeSecurity = SPIncludeSecurity.All;
                settings.LogFilePath = fileLocation;
                settings.WebUrl = destinationUrl;
                settings.SuppressAfterEvents = true;
                settings.UpdateVersions = SPUpdateVersions.Append;
                settings.UserInfoDateTime = SPImportUserInfoDateTimeOption.ImportAll;

                SPImport import = new SPImport(settings);

                import.Started += delegate(System.Object o, SPDeploymentEventArgs e)
                {
                    // started
                    message = "Current Status: " + e.Status.ToString() + " " + e.ObjectsProcessed.ToString()
                        + " of " + e.ObjectsTotal + " objects processed thus far.";
                    message = e.Status.ToString();
                };

                import.Completed += delegate(System.Object o, SPDeploymentEventArgs e)
                {
                    // done
                    message = "Current Status: " + e.Status.ToString() + " " + e.ObjectsProcessed.ToString()
                        + " of " + e.ObjectsTotal + " objects processed.";
                };

                import.Error += delegate(System.Object o, SPDeploymentErrorEventArgs e)
                {
                    // broken
                    message = "Error Message: " + e.ErrorMessage.ToString() + " Error Type: " + e.ErrorType
                        + " Error Recommendation: " + e.Recommendation
                        + " Deployment Object: " + e.DeploymentObject.ToString();
                    System.Console.WriteLine("Error");
                };

                import.ProgressUpdated += delegate(System.Object o, SPDeploymentEventArgs e)
                {
                    // something happened
                    message = "Current Status: " + e.Status.ToString() + " " + e.ObjectsProcessed.ToString()
                        + " of " + e.ObjectsTotal + " objects processed thus far.";
                };

                import.Run();
                bSuccess = true;
            }
            catch (Exception ex)
            {
                bSuccess = false;
                message = string.Format(
                    "Error: The site collection '{0}' could not be imported. The message was '{1}'. And the stacktrace was '{2}'",
                    destinationUrl, ex.Message, ex.StackTrace);
            }
            #endregion
            return bSuccess;
        }

    Here is the code calling the above method:

        [TestMethod]
        public void MOSS07_ObjectModel_ImportSiteCollection()
        {
            bool bSuccess = ObjectModelManager.MOSS07.Deployment.SiteCollection.FullImport(
                "SiteCollBAckup.cmp", true, true, @"C:\SPBACKUP\SPExports", false, false, false,
                @"C:\SPBACKUP\SPExports", "http://spinstancename/TestImport");
            Assert.IsTrue(bSuccess);
        }

    Read the article

  • nginx + apache subdomain redirection fault

    - by webwolf
    I really need your advice folks, since I'm experiencing some trouble with my nginx & apache2 subdomain configs.

    First of all, there's a site (say, site.com) and two subdomains (links.site.com and shop.site.com) whose files are physically located at the same level of the FS hierarchy as site.com itself. My hoster has configured both apache and nginx at my request, but it still doesn't work as it should: both subdomains point to the main page of site.com, for some reason that is unknown and implicit (for me) :( My assumption is that this happens because the site.com record is placed first in both configs?!.. Please help me solve this out! Every opinion would be appreciated =)

    nginx.conf:

        server {
            listen 95.169.187.234:80;
            server_name site.com www.site.com;
            access_log /home/www/site.com/logs/nginx.access.log main;

            location ~* ^.+\.(jpeg|jpg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf|avi|mp3|mpg|mpeg|asf|vmw)$ {
                expires 30d;
                root /home/www/site.com/www;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }

            location / {
                set $referer $http_referer;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Referer $referer;
                proxy_set_header Host $host;
                client_max_body_size 10m;
                client_body_buffer_size 64k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

        server {
            listen 95.169.187.234:80;
            server_name links.site.com www.links.site.com;
            access_log /home/www/links.site.com/logs/nginx.access.log main;

            location ~* ^.+\.(jpeg|jpg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf|avi|mp3|mpg|mpeg|asf|vmw)$ {
                expires 30d;
                root /home/www/links.site.com/www;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }

            location / {
                set $referer $http_referer;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Referer $referer;
                proxy_set_header Host $host;
                client_max_body_size 10m;
                client_body_buffer_size 64k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

        server {
            listen 95.169.187.234:80;
            server_name shop.site.com www.shop.site.com;
            access_log /home/www/shop.site.com/logs/nginx.access.log main;

            location ~* ^.+\.(jpeg|jpg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf|avi|mp3|mpg|mpeg|asf|vmw)$ {
                expires 30d;
                root /home/www/shop.site.com/www;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }

            location / {
                set $referer $http_referer;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Referer $referer;
                proxy_set_header Host $host;
                client_max_body_size 10m;
                client_body_buffer_size 64k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    httpd.conf:

        # ServerRoot "/usr/local/apache2"
        PidFile /var/run/httpd.pid
        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 15
        Listen 127.0.0.1:8080
        NameVirtualHost 127.0.0.1:8080
        ...
        #Listen *:80
        NameVirtualHost *:80

        ServerName www.site.com
        ServerAlias site.com
        UseCanonicalName Off
        CustomLog /home/www/site.com/logs/custom_log combined
        ErrorLog /home/www/site.com/logs/error_log
        DocumentRoot /home/www/site.com/www
        AllowOverride All
        Options +FollowSymLinks
        Options -MultiViews
        Options -Indexes
        Options Includes
        Order allow,deny
        Allow from all
        DirectoryIndex index.html index.htm index.php

        ServerName www.links.site.com
        ServerAlias links.site.com
        UseCanonicalName Off
        CustomLog /home/www/links.site.com/logs/custom_log combined
        ErrorLog /home/www/links.site.com/logs/error_log
        DocumentRoot /home/www/links.site.com/www
        AllowOverride All
        Options +FollowSymLinks
        Options -MultiViews
        Options -Indexes
        Options Includes
        Order allow,deny
        Allow from all
        DirectoryIndex index.html index.htm index.php

        ServerName www.shop.site.com
        ServerAlias shop.site.com
        UseCanonicalName Off
        CustomLog /home/www/shop.site.com/logs/custom_log combined
        ErrorLog /home/www/shop.site.com/logs/error_log
        DocumentRoot /home/www/shop.site.com/www
        AllowOverride All
        Options +FollowSymLinks
        Options -MultiViews
        Options -Indexes
        Options Includes
        Order allow,deny
        Allow from all
        DirectoryIndex index.html index.htm index.php

        # if DSO load module first:
        LoadModule rpaf_module modules/mod_rpaf-2.0.so
        RPAFenable On
        RPAFsethostname On
        RPAFproxy_ips 127.0.0.1
        RPAFheader X-Forwarded-For

        Include conf/virthost/*.conf

    Read the article

  • node.js on multi-core machines

    - by zaharpopov
    node.js looks interesting, BUT I must be missing something - isn't node.js tuned only to run on a single process and thread? Then how does it scale for multi-core CPUs and multi-CPU servers? After all, it's all great to make a single-threaded server as fast as possible, but for high loads I would want to use several CPUs. And the same goes for making applications faster - it seems the way today is to use multiple CPUs and parallelize the tasks. How does node.js fit into this picture? Is its idea to somehow distribute multiple instances, or what?

    Read the article

  • Create multi-part message in MIME format Freemarker template

    - by Mat Banik
    How do you create an email message that contains both a text and an HTML version of the same content? Of course, I would like to know how to set up the FreeMarker template or the header of the message that will be sent. When I look at the source of a multi-part message in MIME format that I receive in my inbox every once in a while, this is what is in there:

        This is a multi-part message in MIME format.

        ------=_NextPart_000_B10D_01CBAAA8.F29DB300
        Content-Type: text/plain
        Content-Transfer-Encoding: 7bit

        ...Text here...

        ------=_NextPart_000_B10D_01CBAAA8.F29DB300
        Content-Type: text/html; charset="iso-8859-1"
        Content-Transfer-Encoding: quoted-printable

        <html><body>
        html code here ...
        </body></html>
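
    The wrapper shown above is a multipart/alternative body: the top-level Content-Type header declares the boundary, each part carries its own Content-Type, and clients render the richest part they support (so the plain-text part goes first as the fallback). The structure is easiest to see generated; a short Python sketch (Python purely to make the MIME layout explicit; the same structure applies whatever templating engine produces the two bodies):

        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText

        # "alternative" means the parts are versions of the same content
        msg = MIMEMultipart("alternative")
        msg["Subject"] = "Example"
        msg.attach(MIMEText("Plain-text version", "plain"))   # fallback first
        msg.attach(MIMEText("<html><body><b>HTML</b> version</body></html>", "html"))
        print(msg.as_string())   # prints a boundary and headers like the sample above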

    Read the article

  • Question on multi-probe Local Sensitive Hashing

    - by Yijinsei
    Hey guys, sorry to be asking this kind of noob question, but because I really need some guidance on how to use multi-probe LSH pretty urgently, I did not do much research myself. I realize there is a lib called LSHKIT available that implements that algorithm, but I have trouble trying to figure out how to use it. Right now, I have a few thousand 296-dimension feature vectors, each representing an image. The vectors are used to query with a user-input image, to retrieve the most similar image. The method I use to derive the distance between vectors is Euclidean distance. I know this might be a rather noob question, but do you guys have knowledge of how I should implement multi-probe LSH? I am really very grateful for any answer or response.
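
    The idea can be sketched without LSHKIT: hash the vectors into buckets, and at query time probe not only the query's own bucket but also nearby ones, so fewer hash tables are needed for the same recall. Below is a deliberately naive Python illustration using sign-of-random-projection hashes; real multi-probe LSH (Lv et al.) orders the probes by how little each perturbation is expected to hurt, and LSHKIT's actual API looks nothing like this:

        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(0)
        DIM, BITS = 296, 12
        planes = rng.normal(size=(BITS, DIM))           # one hyperplane per hash bit

        def key(v):
            return tuple((planes @ v > 0).astype(int))  # bucket = pattern of projection signs

        def probe_keys(k, flips=1):
            yield k                                     # home bucket first
            for r in range(1, flips + 1):               # then buckets 1..flips bit-flips away
                for idx in combinations(range(BITS), r):
                    kk = list(k)
                    for i in idx:
                        kk[i] ^= 1
                    yield tuple(kk)

        vecs = rng.normal(size=(5000, DIM))             # stand-in for the image features
        table = {}
        for i, v in enumerate(vecs):
            table.setdefault(key(v), []).append(i)

        def query(q, flips=1, topk=5):
            cand = {i for k in probe_keys(key(q), flips) for i in table.get(k, [])}
            # re-rank the small candidate set by exact Euclidean distance
            return sorted(cand, key=lambda i: np.linalg.norm(vecs[i] - q))[:topk]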

    Read the article

  • Multi-part gzip file random access (in Java)

    - by toluju
    This may fall in the realm of "not really feasible" or "not really worth the effort", but here goes. I'm trying to randomly access records stored inside a multi-part gzip file. Specifically, the files I'm interested in are compressed Heritrix ARC files. (In case you aren't familiar with multi-part gzip files, the gzip spec allows multiple gzip streams to be concatenated in a single gzip file. They do not share any dictionary information; it is simple binary appending.) I'm thinking it should be possible to do this by seeking to a certain offset within the file, then scanning for the gzip magic header bytes (i.e. 0x1f8b, as per the RFC), and attempting to read the gzip stream from the following bytes. The problem with this approach is that those same bytes can appear inside the actual data as well, so seeking for those bytes can lead to an invalid position to start reading a gzip stream from. Is there a better way to handle random access, given that the record offsets aren't known a priori?
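
    One mitigation for the false-positive problem is to treat every magic-byte hit as a candidate and let the decompressor veto it: a bogus start position almost always fails within the first few bytes. A Python sketch of that scan-and-validate loop (Python rather than Java purely for brevity; window size and offsets are arbitrary):

        import zlib

        MAGIC = b"\x1f\x8b\x08"   # gzip magic plus the only defined method byte (deflate)

        def member_at(f, start):
            """Decompress one gzip member starting at byte offset `start`."""
            f.seek(start)
            d = zlib.decompressobj(wbits=31)   # 31 = expect a gzip wrapper
            out = bytearray()
            while not d.eof:
                chunk = f.read(65536)
                if not chunk:
                    raise zlib.error("truncated member")
                out.extend(d.decompress(chunk))
            return bytes(out)

        def record_near(f, offset, window=1 << 20):
            """Scan forward from `offset`, validating each magic-byte candidate."""
            f.seek(offset)
            buf = f.read(window)
            pos = buf.find(MAGIC)
            while pos >= 0:
                try:
                    return offset + pos, member_at(f, offset + pos)
                except zlib.error:
                    pos = buf.find(MAGIC, pos + 1)   # false positive; keep scanning
            raise ValueError("no gzip member found in window")

    If you control the writing side, recording each member's start offset in a side index removes the guesswork entirely.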

    Read the article

  • Why site column title is blank after feature activation

    - by Rosh Malai
    After installing using stsadm, and activating using the site UI ("Site Collection Features"), these columns show up, but no title is showing for them, so no link is attached. It looks like this:

        Site Column    Type                  Source
        Car Custom Columns
                       Single line of text   SiteCol1
                       Single line of text   SiteCol1
                       Single line of text   SiteCol1
                       Date and Time         SiteCol1

    Here is feature.xml, and here is elements.xml:

        StaticName="Car_VinNumber" xmlns="http://schemas.microsoft.com/sharepoint/"

    Read the article

  • Screening (multi)collinearity in a regression model

    - by aL3xa
    I hope that this one is not going to be an "ask-and-answer" question... here goes: (multi)collinearity refers to extremely high correlations between predictors in a regression model. How to cure them... well, sometimes you don't need to "cure" collinearity, since it doesn't affect the regression model itself, but rather the interpretation of the effect of individual predictors. One way to spot collinearity is to put each predictor as a dependent variable and the other predictors as independent variables, determine R², and if it's larger than .9 (or .95), we can consider the predictor redundant. This is one "method"... what about other approaches? Some of them are time consuming, like excluding predictors from the model and watching for b-coefficient changes - they should be noticeably different. Of course, we must always bear in mind the specific context/goal of the analysis... Sometimes the only remedy is to repeat the research, but right now I'm interested in various ways of screening for redundant predictors when (multi)collinearity occurs in a regression model.
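
    The regress-each-predictor-on-the-rest check described above is exactly what the variance inflation factor packages up: VIF = 1/(1 - R²), so the R² > .9 rule of thumb corresponds to VIF > 10. A sketch with Python's statsmodels on synthetic data (in R the usual equivalent is vif() from the car package):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 3))
        # make a 4th predictor nearly collinear with the 1st
        X = np.column_stack([X, X[:, 0] + rng.normal(scale=0.05, size=200)])

        exog = sm.add_constant(X)   # keep the intercept in when computing VIFs
        vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
        print(np.round(vifs, 1))    # predictors 1 and 4 show very large VIFs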

    Read the article

  • Multi-tab application (C#)

    - by Zach
    Hi, I'm creating a multi-tabbed .NET application that allows the user to dynamically add and remove tabs at runtime. When a new tab is added, a control is added to it (as a child) in which the contents can be edited (e.g. a text box). The user can perform tasks on the currently visible text box using a toolbar/menu bar. To better explain this, look at the mock-up linked below for an example of what I want to accomplish; it doesn't actually work that way, but it shows what I want to get done. Essentially, like a multi-tabbed Notepad. View the image here: http://picasion.com/pic15/324b466729e42a74b9632c1473355d3b.gif Is this possible in .NET? I'm pretty sure it is; I'm just looking for a way that it can be implemented.

    Read the article
