Search Results

Search found 24132 results on 966 pages for 'non clustered index'.


  • nginx + apache subdomain redirection fault

    - by webwolf
    I really need your advice, folks, since I'm experiencing some trouble with my nginx and Apache2 subdomain configs. First of all, there's a site (say, site.com) and two subdomains (links.site.com and shop.site.com) whose files are physically located at the same level of the filesystem hierarchy as site.com itself. My hoster has configured both Apache and nginx at my request, but it still doesn't work as it used to: both subdomains point to the main page of site.com for some reason that is unknown and not obvious to me :( My assumption is that this happens because the site.com record is placed first in both configs?! Please help me sort this out! Every opinion would be appreciated =)

    nginx.conf:

        server {
            listen 95.169.187.234:80;
            server_name site.com www.site.com;
            access_log /home/www/site.com/logs/nginx.access.log main;

            location ~* ^.+\.(jpeg|jpg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf|avi|mp3|mpg|mpeg|asf|vmw)$ {
                expires 30d;
                root /home/www/site.com/www;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }

            location / {
                set $referer $http_referer;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Referer $referer;
                proxy_set_header Host $host;
                client_max_body_size 10m;
                client_body_buffer_size 64k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

        server {
            listen 95.169.187.234:80;
            server_name links.site.com www.links.site.com;
            access_log /home/www/links.site.com/logs/nginx.access.log main;

            location ~* ^.+\.(jpeg|jpg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf|avi|mp3|mpg|mpeg|asf|vmw)$ {
                expires 30d;
                root /home/www/links.site.com/www;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }

            location / {
                set $referer $http_referer;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Referer $referer;
                proxy_set_header Host $host;
                client_max_body_size 10m;
                client_body_buffer_size 64k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

        server {
            listen 95.169.187.234:80;
            server_name shop.site.com www.shop.site.com;
            access_log /home/www/shop.site.com/logs/nginx.access.log main;

            location ~* ^.+\.(jpeg|jpg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|swf|avi|mp3|mpg|mpeg|asf|vmw)$ {
                expires 30d;
                root /home/www/shop.site.com/www;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }

            location / {
                set $referer $http_referer;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Referer $referer;
                proxy_set_header Host $host;
                client_max_body_size 10m;
                client_body_buffer_size 64k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    httpd.conf:

        # ServerRoot "/usr/local/apache2"
        PidFile /var/run/httpd.pid
        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 15
        Listen 127.0.0.1:8080
        NameVirtualHost 127.0.0.1:8080
        ...
        #Listen *:80
        NameVirtualHost *:80

        ServerName www.site.com
        ServerAlias site.com
        UseCanonicalName Off
        CustomLog /home/www/site.com/logs/custom_log combined
        ErrorLog /home/www/site.com/logs/error_log
        DocumentRoot /home/www/site.com/www
        AllowOverride All
        Options +FollowSymLinks
        Options -MultiViews
        Options -Indexes
        Options Includes
        Order allow,deny
        Allow from all
        DirectoryIndex index.html index.htm index.php

        ServerName www.links.site.com
        ServerAlias links.site.com
        UseCanonicalName Off
        CustomLog /home/www/links.site.com/logs/custom_log combined
        ErrorLog /home/www/links.site.com/logs/error_log
        DocumentRoot /home/www/links.site.com/www
        AllowOverride All
        Options +FollowSymLinks
        Options -MultiViews
        Options -Indexes
        Options Includes
        Order allow,deny
        Allow from all
        DirectoryIndex index.html index.htm index.php

        ServerName www.shop.site.com
        ServerAlias shop.site.com
        UseCanonicalName Off
        CustomLog /home/www/shop.site.com/logs/custom_log combined
        ErrorLog /home/www/shop.site.com/logs/error_log
        DocumentRoot /home/www/shop.site.com/www
        AllowOverride All
        Options +FollowSymLinks
        Options -MultiViews
        Options -Indexes
        Options Includes
        Order allow,deny
        Allow from all
        DirectoryIndex index.html index.htm index.php

        # if DSO load module first:
        LoadModule rpaf_module modules/mod_rpaf-2.0.so
        RPAFenable On
        RPAFsethostname On
        RPAFproxy_ips 127.0.0.1
        RPAFheader X-Forwarded-For

        Include conf/virthost/*.conf

    Read the article

  • htaccess rewriterule leading slash

    - by Tiddo
    I'm using .htaccess to rewrite my URLs so that I can have nice clean URLs. However, the same .htaccess file does different things on my local server and my remote server: on my local server the URL of the website is like http://localhost/example/, and on my remote server the URL is http://example.com/. For my local server I can use the following rewrite rule:

        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]

    However, when I use this on my remote server I get an internal server error. Instead I have to use this (note the leading slash):

        RewriteRule ^(.*)$ /index.php?page=$1 [L,QSA]

    Unfortunately this doesn't work on my local server: there, this rewrite rule requests http://localhost/index.php instead of http://localhost/example/index.php. How can I make this work on both my remote and local server?
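
    A sketch of one possible fix (not part of the original question): keep the relative target and add file/directory guards so the rule does not rewrite index.php again, which is a common cause of the internal server error on the remote host. The paths are illustrative:

        Options +FollowSymLinks
        RewriteEngine on
        # skip real files and directories so index.php itself is not rewritten again
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # a relative target resolves against the per-directory prefix on both servers
        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]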

    Read the article

  • Excel, Lookup special characters and spaces.

    - by Sisyphus
    I have an Excel spreadsheet that has multiple sheets. The first sheet is an index of files. I am using the following formula to look up a value in column A against the index sheet; if it matches, it copies the value from column B of the index sheet. The formula is:

        =IF($A3="", "", (LOOKUP($A3, INDEX!$A$3:$A$26, INEDEX!B$3:B$26)))

    It works for data that has no spaces and special characters. Anybody have any ideas why it doesn't work otherwise and how I can make it work? Thanks in advance.
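
    A possible correction (not from the original post): the second range reference is misspelled (INEDEX instead of INDEX), and LOOKUP assumes the lookup range is sorted, which breaks on arbitrary text. An exact-match INDEX/MATCH form avoids both issues; the ranges are copied from the question:

        =IF($A3="", "", INDEX(INDEX!B$3:B$26, MATCH($A3, INDEX!$A$3:$A$26, 0)))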

    Read the article

  • nginx root directory not forwarding correctly

    - by user66700
    The server files are stored in /var/www/. Everything was working perfectly, then I've been getting the following errors:

        2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html" is not found (2: No such file or directory), client: 119.110.28.211, server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1", host: "secure.domain.com"

    Here's my config:

        server {
            server_name secure.domain.com;
            listen 443;
            listen [::]:443 default ipv6only=on;
            gzip on;
            gzip_comp_level 1;
            gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript;
            error_log logs/ssl.error.log;
            gzip_static on;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_disable "msie6";
            gzip_vary on;
            ssl on;
            ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5;
            keepalive_timeout 0;
            ssl_certificate /root/server.pem;
            ssl_certificate_key /root/ssl.key;
            location / {
                root /var/www;
                index index.html index.htm index.php;
            }
        }

    Read the article

  • Why do I have such high overhead when creating tables in MSSQL7?

    - by scotty2012
    I have an old system running MSSQL7. It takes about 10.5 seconds to create the table below and another 30 seconds to add the index. Is there anything I can do to decrease these times?

        CREATE TABLE [dbo].[MyTable] (
            [queue] [int] NOT NULL ,
            [seqNum] [numeric](12, 0) NOT NULL ,
            [cTime] [char] (14) NOT NULL ,
            [msg] [char] (255) NULL ,
            [status] [int] NOT NULL ,
            [socket] [int] NULL
        ) ON [PRIMARY]
        GO

        CREATE INDEX [search] ON [dbo].[MyTable]([queue], [seqNum], [status]) ON [PRIMARY]
        GO

        CREATE INDEX [new] ON [dbo].[MyTable]([queue], [status]) ON [PRIMARY]

    Read the article

  • Apache mod_rewrite not working properly on Mac OS X 10.6 (Snow Leopard)

    - by DashRantic
    Hello all, I'm trying to create a PHP website with clean URLs using Apache's mod_rewrite and a .htaccess file. mod_rewrite seems to be working; however, it claims it cannot find files on my server that do exist. Just as a basic test, this is what my .htaccess file looks like at the moment; going to [mysite]/page should redirect to the index.php file:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteRule ^page$ index.php

    As far as I know, I have set up the .conf file appropriately as well:

        <Directory "/Users/myuser/Sites/">
            Options Indexes MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    However, when I try accessing the URL set up via mod_rewrite (localhost/~myuser/mysite/page), I get this:

        Not Found
        The requested URL /Users/myuser/Sites/mysite/index.php was not found on this server.

    However, that file does exist, and that is the proper location! The site works fine otherwise: if I go to localhost/~myuser/mysite/index.php, everything works fine, minus any sort of clean URLs, of course. Has anyone seen this before/have any ideas as to what I'm doing wrong?
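
    One commonly suggested fix (not part of the original question): when the .htaccess lives under a per-user alias such as /~myuser/mysite/, mod_rewrite usually needs a RewriteBase so the per-directory rewrite is re-applied under the URL prefix rather than the filesystem path. A sketch, with the path assumed from the question:

        Options +FollowSymLinks
        RewriteEngine on
        # re-anchor per-directory rewrites at the public URL prefix, not the filesystem path
        RewriteBase /~myuser/mysite/
        RewriteRule ^page$ index.php [L]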

    Read the article

  • nginx location pathing issue

    - by Michael Jefferys
    I've got a pretty much default sites-enabled setup in my nginx on Debian Squeeze, and I'm now trying to get it to serve up my Munin graphs on myhost/munin/. Here's the location I've added to the config:

        location /munin {
            root /var/cache/munin/www/;
            index index.htm index.html;
        }

    And here is the error I receive:

        2012/07/09 23:52:03 [error] 3598#0: *13 "/var/cache/munin/www/munin/index.htm" is not found (2: No such file or directory), client: 93.*.*.*, server: , request: "GET /munin/ HTTP/1.1", host: ""

    This setup used to 'just work' in Apache. I'm new to nginx, so I'm a bit lost as to why it's adding the extra /munin when looking for the path. Any advice?
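
    A sketch of one possible fix (not from the original question): with root, nginx appends the full request URI to the root path, which is where the extra /munin comes from; alias substitutes the location prefix instead:

        location /munin/ {
            alias /var/cache/munin/www/;
            index index.html index.htm;
        }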

    Read the article

  • jquery with ASP.NET MVC - calling ajax enabled web service

    - by dcp
    This is a bit of a continuation of a previous question. Now I'm trying to make a call to an AJAX-enabled web service which I have defined within the ASP.NET MVC application (i.e. the MovieService.svc). But the service is never being called in my getMovies JavaScript function. This same technique of calling the AJAX web service works ok if I try it in a non ASP.NET MVC application, so it makes me wonder if maybe the ASP MVC routes are interfering with things somehow when it tries to make the AJAX web service call. Do you have any idea why my web service isn't getting called? Code below.

        <script src="<%= ResolveClientUrl("~/scripts/jquery-1.4.2.min.js") %>" type="text/javascript"></script>
        <script src="<%= ResolveClientUrl("~/scripts/grid.locale-en.js") %>" type="text/javascript"></script>
        <script src="<%= ResolveClientUrl("~/scripts/jquery-ui-1.8.1.custom.min.js") %>" type="text/javascript"></script>
        <script src="<%= ResolveClientUrl("~/scripts/jquery.jqGrid.min.js") %>" type="text/javascript"></script>
        <script type="text/javascript">
            var lastsel2;

            function successFunction(jsondata) {
                debugger
                var thegrid = jQuery("#editgrid");
                for (var i = 0; i < jsondata.d.length; i++) {
                    thegrid.addRowData(i + 1, jsondata.d[i]);
                }
            }

            function getMovies() {
                debugger
                // ***** the MovieService#GetMovies method never gets called
                $.ajax({
                    url: 'MovieService.svc/GetMovies',
                    data: "{}", // For empty input data use "{}",
                    dataType: "json",
                    type: "GET",
                    contentType: "application/json; charset=utf-8",
                    success: successFunction
                });
            }

            jQuery(document).ready(function() {
                jQuery("#editgrid").jqGrid({
                    datatype: getMovies,
                    colNames: ['id', 'Movie Name', 'Directed By', 'Release Date', 'IMDB Rating', 'Plot', 'ImageURL'],
                    colModel: [
                        { name: 'id', index: 'Id', width: 55, sortable: false, hidden: true, editable: false, editoptions: { readonly: true, size: 10} },
                        { name: 'Movie Name', index: 'Name', width: 250, editable: true, editoptions: { size: 10} },
                        { name: 'Directed By', index: 'Director', width: 250, align: 'right', editable: true, editoptions: { size: 10} },
                        { name: 'Release Date', index: 'ReleaseDate', width: 100, align: 'right', editable: true, editoptions: { size: 10} },
                        { name: 'IMDB Rating', index: 'IMDBUserRating', width: 100, align: 'right', editable: true, editoptions: { size: 10} },
                        { name: 'Plot', index: 'Plot', width: 150, hidden: false, editable: true, editoptions: { size: 30} },
                        { name: 'ImageURL', index: 'ImageURL', width: 55, hidden: true, editable: false, editoptions: { readonly: true, size: 10} }
                    ],
                    pager: jQuery('#pager'),
                    rowNum: 5,
                    rowList: [5, 10, 20],
                    sortname: 'id',
                    sortorder: "desc",
                    height: '100%',
                    width: '100%',
                    viewrecords: true,
                    imgpath: '/Content/jqGridCss/redmond/images',
                    caption: 'Movies from 2008',
                    editurl: '/Home/EditMovieData/',
                    caption: 'Movie List'
                });

                $("#bedata").click(function() {
                    var gr = jQuery("#editgrid").jqGrid('getGridParam', 'selrow');
                    if (gr != null)
                        jQuery("#editgrid").jqGrid('editGridRow', gr, { height: 280, reloadAfterSubmit: false });
                    else
                        alert("Hey dork, please select a row");
                });
            });
        </script>

        <h2><%= Html.Encode(ViewData["Message"]) %></h2>
        <p>
            To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
        </p>
        <table id="editgrid"> </table>
        <div id="pager" style="text-align: center;"> </div>
        <input type="button" id="bedata" value="Edit Selected" />

    Here's my RegisterRoutes code:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("*MovieService.svc*");

            routes.MapRoute(
                "Default",                                              // Route name
                "{controller}/{action}/{id}",                           // URL with parameters
                new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
            );
        }

    Here's what my MovieService class looks like:

        namespace jQueryMVC
        {
            [ServiceContract(Namespace = "")]
            [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
            public class MovieService
            {
                // Add [WebGet] attribute to use HTTP GET
                [OperationContract]
                [WebGet(ResponseFormat = WebMessageFormat.Json)]
                public IList<Movie> GetMovies()
                {
                    return Persistence.GetMovies();
                }
            }
        }
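
    One detail worth double-checking (an assumption on my part, not something confirmed in the original post): "*MovieService.svc*" is not a valid route URL pattern, so the .svc request may still be claimed by the default MVC route. A common way to ignore such paths is a catch-all parameter with a regex constraint, e.g.:

        routes.IgnoreRoute("{*svcpath}", new { svcpath = @".*\.svc(/.*)?" });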

    Read the article

  • jqGrid multiplesearch - How do I add and/or column for each row?

    - by jimasp
    I have a jqGrid multiple search dialog as below (note the first empty td in each row). What I want is a search grid like this (below), where the first td is filled in with "and/or" accordingly, and the corresponding search filters are also built that way. The empty td is there by default, so I assume that this is a standard jqGrid feature that I can turn on, but I can't seem to find it in the docs. My code looks like this:

        <script type="text/javascript">
            $(document).ready(function () {
                var lastSel;
                var pageSize = 10; // the initial pageSize

                $("#grid").jqGrid({
                    url: '/Ajax/GetJqGridActivities',
                    editurl: '/Ajax/UpdateJqGridActivities',
                    datatype: "json",
                    colNames: ['Id', 'Description', 'Progress', 'Actual Start', 'Actual End', 'Status', 'Scheduled Start', 'Scheduled End'],
                    colModel: [
                        { name: 'Id', index: 'Id', searchoptions: { sopt: ['eq', 'ne']} },
                        { name: 'Description', index: 'Description', searchoptions: { sopt: ['eq', 'ne']} },
                        { name: 'Progress', index: 'Progress', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} },
                        { name: 'ActualStart', index: 'ActualStart', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} },
                        { name: 'ActualEnd', index: 'ActualEnd', editable: true, searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} },
                        { name: 'Status', index: 'Status.Name', editable: true, searchoptions: { sopt: ['eq', 'ne']} },
                        { name: 'ScheduledStart', index: 'ScheduledStart', searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} },
                        { name: 'ScheduledEnd', index: 'ScheduledEnd', searchoptions: { sopt: ['eq', 'ne', "gt", "ge", "lt", "le"]} }
                    ],
                    jsonReader: { root: 'rows', id: 'Id', repeatitems: false },
                    rowNum: pageSize, // this is the pageSize
                    rowList: [pageSize, 50, 100],
                    pager: '#pager',
                    sortname: 'Id',
                    autowidth: true,
                    shrinkToFit: false,
                    viewrecords: true,
                    sortorder: "desc",
                    caption: "JSON Example",
                    onSelectRow: function (id) {
                        if (id && id !== lastSel) {
                            $('#grid').jqGrid('restoreRow', lastSel);
                            $('#grid').jqGrid('editRow', id, true);
                            lastSel = id;
                        }
                    }
                });

                // change the options (called via searchoptions)
                var updateGroupOpText = function ($form) {
                    $('select.opsel option[value="AND"]', $form[0]).text('and');
                    $('select.opsel option[value="OR"]', $form[0]).text('or');
                    $('select.opsel', $form[0]).trigger('change');
                };

                // $(window).bind('resize', function() {
                //     ("#grid").setGridWidth($(window).width());
                // }).trigger('resize');

                // paging bar settings (inc. buttons)
                // and advanced search
                $("#grid").jqGrid('navGrid', '#pager',
                    { edit: true, add: false, del: false }, // buttons
                    {}, // edit option - these are ordered params!
                    {}, // add options
                    {}, // del options
                    { closeOnEscape: true, multipleSearch: true, closeAfterSearch: true,
                      onInitializeSearch: updateGroupOpText, afterRedraw: updateGroupOpText }, // search options
                    {}
                );

                // TODO: work out how to use global $.ajaxSetup instead
                $('#grid').ajaxError(function (e, jqxhr, settings, exception) {
                    var str = "Ajax Error!\n\n";
                    if (jqxhr.status === 0) {
                        str += 'Not connect.\n Verify Network.';
                    } else if (jqxhr.status == 404) {
                        str += 'Requested page not found. [404]';
                    } else if (jqxhr.status == 500) {
                        str += 'Internal Server Error [500].';
                    } else if (exception === 'parsererror') {
                        str += 'Requested JSON parse failed.';
                    } else if (exception === 'timeout') {
                        str += 'Time out error.';
                    } else if (exception === 'abort') {
                        str += 'Ajax request aborted.';
                    } else {
                        str += 'Uncaught Error.\n' + jqxhr.responseText;
                    }
                    alert(str);
                });

                $("#grid").jqGrid('bindKeys');
                $("#grid").jqGrid('inlineNav', "#grid");
            });
        </script>

        <table id="grid"/>
        <div id="pager"/>

    Read the article

  • Merging .net object graph

    - by Tiju John
    Hi guys, has anyone come across a scenario where you needed to merge one object with another object of the same type, merging the complete object graph? For example, if I have a Person object and one Person object has the first name and the other the last name, is there some way to merge both objects into a single object?

        public class Person
        {
            public Int32 Id { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
        }

        public class MyClass
        {
            // both instances refer to the same person, probably coming from different sources
            Person obj1 = new Person();
            obj1.Id = 1;
            obj1.FirstName = "Tiju";

            Person obj2 = new Person();
            obj2.Id = 1;
            obj2.LastName = "John";

            // some way of merging both the objects
            obj1.MergeObject(obj2); // ??

            // obj1.Id        // = 1
            // obj1.FirstName // = "Tiju"
            // obj1.LastName  // = "John"
        }

    I had come across this type of requirement and I wrote an extension method to do the same:

        public static class ExtensionMethods
        {
            private const string Key = "Id";

            public static IList MergeList(this IList source, IList target)
            {
                Dictionary<string, object> itemData = new Dictionary<string, object>();

                // fill the dictionary for the existing list
                string temp = null;
                foreach (object item in source)
                {
                    temp = GetKeyOfRecord(item);
                    if (!String.IsNullOrEmpty(temp))
                        itemData[temp] = item;
                }

                // if the same id exists, merge the object, otherwise add to the existing list
                foreach (object item in target)
                {
                    temp = GetKeyOfRecord(item);
                    if (!String.IsNullOrEmpty(temp) && itemData.ContainsKey(temp))
                        itemData[temp].MergeObject(item);
                    else
                        source.Add(item);
                }
                return source;
            }

            private static string GetKeyOfRecord(object o)
            {
                string keyValue = null;
                Type pointType = o.GetType();
                if (pointType != null)
                    foreach (PropertyInfo propertyItem in pointType.GetProperties())
                    {
                        if (propertyItem.Name == Key)
                        {
                            keyValue = (string)propertyItem.GetValue(o, null);
                        }
                    }
                return keyValue;
            }

            public static object MergeObject(this object source, object target)
            {
                if (source != null && target != null)
                {
                    Type typeSource = source.GetType();
                    Type typeTarget = target.GetType();

                    // if both types are the same, try to merge
                    if (typeSource != null && typeTarget != null && typeSource.FullName == typeTarget.FullName)
                        if (typeSource.IsClass && !typeSource.Namespace.Equals("System", StringComparison.InvariantCulture))
                        {
                            PropertyInfo[] propertyList = typeSource.GetProperties();
                            for (int index = 0; index < propertyList.Length; index++)
                            {
                                Type tempPropertySourceValueType = null;
                                object tempPropertySourceValue = null;
                                Type tempPropertyTargetValueType = null;
                                object tempPropertyTargetValue = null;

                                // get rid of indexers
                                if (propertyList[index].GetIndexParameters().Length == 0)
                                {
                                    tempPropertySourceValue = propertyList[index].GetValue(source, null);
                                    tempPropertyTargetValue = propertyList[index].GetValue(target, null);
                                }
                                if (tempPropertySourceValue != null)
                                    tempPropertySourceValueType = tempPropertySourceValue.GetType();
                                if (tempPropertyTargetValue != null)
                                    tempPropertyTargetValueType = tempPropertyTargetValue.GetType();

                                // if the property is a list
                                IList ilistSource = tempPropertySourceValue as IList;
                                IList ilistTarget = tempPropertyTargetValue as IList;
                                if (ilistSource != null || ilistTarget != null)
                                {
                                    if (ilistSource != null)
                                        ilistSource.MergeList(ilistTarget);
                                    else
                                        propertyList[index].SetValue(source, ilistTarget, null);
                                }
                                // if the property is a Dto
                                else if (tempPropertySourceValue != null || tempPropertyTargetValue != null)
                                {
                                    if (tempPropertySourceValue != null)
                                        tempPropertySourceValue.MergeObject(tempPropertyTargetValue);
                                    else
                                        propertyList[index].SetValue(source, tempPropertyTargetValue, null);
                                }
                            }
                        }
                }
                return source;
            }
        }

    However, this works when the source property is null: if the target has it, it will copy that to the source. It can still be improved to merge when there are inconsistencies, e.g. if FirstName = "Tiju" and FirstName = "John". Any comments appreciated. Thanks, TJ

    Read the article

  • Make interchangeable class types via pointer casting only, without having to allocate any new objects?

    - by HostileFork
    UPDATE: I do appreciate "don't want that, want this instead" suggestions. They are useful, especially when provided in the context of the motivating scenario. Still... regardless of goodness/badness, I've become curious to find a hard-and-fast "yes, that can be done legally in C++11" vs "no, it is not possible to do something like that".

    I want to "alias" an object pointer as another type, for the sole purpose of adding some helper methods. The alias cannot add data members to the underlying class (in fact, the more I can prevent that from happening the better!). All aliases are equally applicable to any object of this type... it's just helpful if the type system can hint which alias is likely the most appropriate. There should be no information about any specific alias that is ever encoded in the underlying object. Hence, I feel like you should be able to "cheat" the type system and just let it be an annotation... checked at compile time, but ultimately irrelevant to the runtime casting. Something along these lines:

        Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory();
        Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr);

    Under the hood, the factory method is actually making a NodeBase class, and then using a similar reinterpret_cast to return it as a Node<AccessorFoo>*.

    The easy way to avoid this is to make these lightweight classes that wrap nodes and are passed around by value. Thus you don't need casting, just Accessor classes that take the node handle to wrap in their constructor:

        AccessorFoo foo (NodeBase::createViaFactory());
        AccessorBar bar (foo.getNode());

    But if I don't have to pay for all that, I don't want to. That would involve, for instance, making a special accessor type for each sort of wrapped pointer (AccessorFooShared, AccessorFooUnique, AccessorFooWeak, etc.). Having these typed pointers being aliased for one single pointer-based object identity is preferable, and provides a nice orthogonality. So back to that original question:

        Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory();
        Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr);

    Seems like there would be some way to do this that might be ugly but not "break the rules". According to ISO14882:2011(e) 5.2.10-7:

        An object pointer can be explicitly converted to an object pointer of a different type. When a prvalue v of type "pointer to T1" is converted to the type "pointer to cv T2", the result is static_cast<cv T2*>(static_cast<cv void*>(v)) if both T1 and T2 are standard-layout types (3.9) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type "pointer to T1" to the type "pointer to T2" (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified.

    Drilling into the definition of a "standard-layout class", we find that it:

        - has no non-static data members of type non-standard-layout class (or array of such types) or reference, and
        - has no virtual functions (10.3) and no virtual base classes (10.1), and
        - has the same access control (clause 11) for all non-static data members, and
        - has no non-standard-layout base classes, and
        - either has no non-static data member in the most-derived class and at most one base class with non-static data members, or has no base classes with non-static data members, and
        - has no base classes of the same type as the first non-static data member.

    Sounds like working with something like this would tie my hands a bit with no virtual methods in the accessors or the node. Yet C++11 apparently has std::is_standard_layout to keep things checked. Can this be done safely? Appears to work in gcc-4.7, but I'd like to be sure I'm not invoking undefined behavior.
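
    A small illustrative check (my sketch, not from the original question) of the compile-time guard the poster mentions; the class names follow the question, and the helper-only accessor layers are assumptions:

        #include <type_traits>

        struct NodeBase { int payload; };   // all data lives in the base

        template <typename Accessor>
        struct Node : NodeBase {            // adds helper methods only: no data, no virtuals
            int doubled() const { return payload * 2; }
        };

        struct AccessorFoo {};
        struct AccessorBar {};

        // both aliases must stay standard-layout for the pointer-conversion clause to apply
        static_assert(std::is_standard_layout<Node<AccessorFoo>>::value, "Node<AccessorFoo> must be standard-layout");
        static_assert(std::is_standard_layout<Node<AccessorBar>>::value, "Node<AccessorBar> must be standard-layout");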

    Read the article

  • TypeError: Object {...} has no method 'find' - when using mongoose with express

    - by sdouble
    I'm having trouble getting data from MongoDB using mongoose schemas with express. I first tested with just mongoose in a single file (mongoosetest.js) and it works fine. But when I start dividing it all up with express routes and config files, things start to break. I'm sure it's something simple, but I've spent the last 3 hours googling and trying to figure out what I'm doing wrong and can't find anything that matches my process enough to compare.

    mongoosetest.js (this works fine, but not for my application):

        var mongoose = require('mongoose');
        mongoose.connect('mongodb://localhost/meanstack');
        var db = mongoose.connection;

        var userSchema = mongoose.Schema({ name: String }, {collection: 'users'});
        var User = mongoose.model('User', userSchema);

        User.find(function(err, users) {
            console.log(users);
        });

    These files are where I'm having issues. I'm sure it's something silly, probably a direct result of using external files, exports, and requires. My server.js file just starts up and configures express. I also have a routing file and a db config file.

    routing file (allRoutes.js):

        var express = require('express');
        var router = express.Router();
        var db = require('../config/db');
        var User = db.User();

        // routes
        router.get('/user/list', function(req, res) {
            User.find(function(err, users) {
                console.log(users);
            });
        });

        // catch-all route
        router.get('*', function(req, res) {
            res.sendfile('./public/index.html');
        });

        module.exports = router;

    db config file (db.js):

        var mongoose = require('mongoose');
        var dbHost = 'localhost';
        var dbName = 'meanstack';
        var db = mongoose.createConnection(dbHost, dbName);
        var Schema = mongoose.Schema,
            ObjectId = Schema.ObjectId;

        db.once('open', function callback() {
            console.log('connected');
        });

        // schemas
        var User = new Schema({ name : String }, {collection: 'users'});

        // models
        mongoose.model('User', User);
        var User = mongoose.model('User');

        // exports
        module.exports.User = User;

    I receive the following error when I browse to localhost:3000/user/list:

        TypeError: Object { _id: 5398bed35473f98c494168a3 } has no method 'find'
            at Object.module.exports [as handle] (C:\...\routes\allRoutes.js:8:8)
            at next_layer (C:\...\node_modules\express\lib\router\route.js:103:13)
            at Route.dispatch (C:\...\node_modules\express\lib\router\route.js:107:5)
            at C:\...\node_modules\express\lib\router\index.js:213:24
            at Function.proto.process_params (C:\...\node_modules\express\lib\router\index.js:284:12)
            at next (C:\...\node_modules\express\lib\router\index.js:207:19)
            at Function.proto.handle (C:\...\node_modules\express\lib\router\index.js:154:3)
            at Layer.router (C:\...\node_modules\express\lib\router\index.js:24:12)
            at trim_prefix (C:\...\node_modules\express\lib\router\index.js:255:15)
            at C:\...\node_modules\express\lib\router\index.js:216:9

    Like I said, it's probably something silly that I'm messing up with trying to organize my code, since my single file (mongoosetest.js) works as expected. Thanks.
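
    One thing worth checking (my reading of the stack trace, not a confirmed fix): module.exports.User is the model itself, so calling db.User() constructs a document (which is why the error shows an object with only an _id), and a document has no find method. A sketch of the change in allRoutes.js:

        var db = require('../config/db');
        var User = db.User;   // reference the model, do not invoke it

        router.get('/user/list', function(req, res) {
            User.find(function(err, users) {
                console.log(users);
            });
        });

    A separate thing to verify is that db.js registers the model on the same connection it queries, since mongoose.model() uses the default connection while the file opens a separate one with mongoose.createConnection().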

    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I've been doing a lot with it is to remove explicit Reflection code that's often necessary when you 'dynamically' need to walk an object hierarchy. In the past I've had a number of ReflectionUtils that used string based expressions to walk an object hierarchy. With the introduction of dynamic much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot.

    The old Way - Reflection

    Here's a really contrived example, but assume for a second you'd want to dynamically retrieve a Page.Request.Url.AbsolutePath based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this:

        string path = Page.Request.Url.AbsolutePath;

    Now assume for a second that Page wasn't available as a strongly typed instance and all you had was an object reference to start with and you couldn't cast it (right, I said this was contrived :-)). If you're using raw Reflection code to retrieve this you'd end up writing 3 sets of Reflection calls using GetValue(). Here's some internal code I use to retrieve Property values as part of ReflectionUtils:

        /// <summary>
        /// Retrieve a property value from an object dynamically. This is a simple version
        /// that uses Reflection calls directly. It doesn't support indexers.
        /// </summary>
        /// <param name="instance">Object to make the call on</param>
        /// <param name="property">Property to retrieve</param>
        /// <returns>Object - cast to proper type</returns>
        public static object GetProperty(object instance, string property)
        {
            return instance.GetType().GetProperty(property, ReflectionUtils.MemberAccess).GetValue(instance, null);
        }

    If you want more control over properties and support both fields and properties as well as array indexers a little more work is required:

        /// <summary>
        /// Parses Properties and Fields including Array and Collection references.
        /// Used internally for the 'Ex' Reflection methods.
        /// </summary>
        /// <param name="Parent"></param>
        /// <param name="Property"></param>
        /// <returns></returns>
        private static object GetPropertyInternal(object Parent, string Property)
        {
            if (Property == "this" || Property == "me")
                return Parent;

            object result = null;
            string pureProperty = Property;
            string indexes = null;
            bool isArrayOrCollection = false;

            // Deal with Array Property
            if (Property.IndexOf("[") > -1)
            {
                pureProperty = Property.Substring(0, Property.IndexOf("["));
                indexes = Property.Substring(Property.IndexOf("["));
                isArrayOrCollection = true;
            }

            // Get the member
            MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0];
            if (member.MemberType == MemberTypes.Property)
                result = ((PropertyInfo)member).GetValue(Parent, null);
            else
                result = ((FieldInfo)member).GetValue(Parent);

            if (isArrayOrCollection)
            {
                indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty);

                if (result is Array)
                {
                    int Index = -1;
                    int.TryParse(indexes, out Index);
                    result = CallMethod(result, "GetValue", Index);
                }
                else if (result is ICollection)
                {
                    if (indexes.StartsWith("\""))
                    {
                        // String Index
                        indexes = indexes.Trim('\"');
                        result = CallMethod(result, "get_Item", indexes);
                    }
                    else
                    {
                        // assume numeric index
                        int index = -1;
                        int.TryParse(indexes, out index);
                        result = CallMethod(result, "get_Item", index);
                    }
                }
            }

            return result;
        }

        /// <summary>
        /// Returns a property or field value using a base object and sub members including . syntax.
        /// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company")
        /// This method also supports indexers in the Property value such as:
        /// Customer.DataSet.Tables["Customers"].Rows[0]
        /// </summary>
        /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param>
        /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param>
        /// <returns></returns>
        public static object GetPropertyEx(object Parent, string Property)
        {
            Type type = Parent.GetType();

            int at = Property.IndexOf(".");
            if (at < 0)
            {
                // Complex parse of the property
                return GetPropertyInternal(Parent, Property);
            }

            // Walk the . syntax - split into current object (Main) and further parsed objects (Subs)
            string main = Property.Substring(0, at);
            string subs = Property.Substring(at + 1);

            // Retrieve the next . section of the property
            object sub = GetPropertyInternal(Parent, main);

            // Now go parse the left over sections
            return GetPropertyEx(sub, subs);
        }

    As you can see there's a fair bit of code involved in retrieving a property or field value reliably, especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties, including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway, with ReflectionUtils I can retrieve Page.Request.Url.AbsolutePath using code like this:

        string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string;

    This works fine, but is bulky to write and of course requires that I use my custom routines. It's also quite slow, as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy.

    Enter dynamic – way easier!

    .NET 4.0's dynamic type makes the above really easy. The following code is all that it takes:

        object objPage = Page;   // force to object for contrivance :)
        dynamic page = objPage;  // convert to dynamic from untyped object
        string scriptUrl = page.Request.Url.AbsolutePath;

    The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference, which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types, so when you look at Intellisense you'll see something like this when typing Request.:

    In other words, any dynamic value access on an object returns another dynamic object, which is what allows the walking of the hierarchy chain. Note also that the result value doesn't have to be explicitly cast as string in the code above: the compiler is perfectly happy without the cast in this case, inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment, which is nice, making for natural syntax that looks *exactly* like the fully typed syntax but is completely dynamic.

    Note that you can also use indexers in the same natural syntax, so the following also works on the dynamic page instance:

        string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"];

    The dynamic type is going to make a lot of Reflection code go away, as it's simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance:

        void Reflection()
        {
            Stopwatch stop = new Stopwatch();
            stop.Start();

            for (int i = 0; i < reps; i++)
            {
                // string url = ReflectionUtils.GetProperty(Page,"Title") as string; // "Request.Url.AbsolutePath") as string;
                string url = Page.GetType().GetProperty("Title", ReflectionUtils.MemberAccess).GetValue(Page, null) as string;
            }

            stop.Stop();
            Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString());
        }

        void Dynamic()
        {
            Stopwatch stop = new Stopwatch();
            stop.Start();

            dynamic page = Page;
            for (int i = 0; i < reps; i++)
            {
                string url = page.Title; // Request.Url.AbsolutePath;
            }

            stop.Stop();
            Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString());
        }

    The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There's a bit of overhead in the first dynamic object call, but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in .NET  CSharp

    Read the article

  • importing a VCard in the address book , objective C [migrated]

    - by user1044771
    I am designing a QR code reader, and it needs to detect and import contact cards in vCard format. Is there a way to add the card data to the system Address Book directly, or do I need to parse the vCard myself and add each field individually? I will be getting the vCard as an NSString. I tried the code below (from a different post) and it didn't work:

        -(IBAction)saveContacts {
            NSString *vCardString = @"vCardDataHere";
            CFDataRef vCardData = (__bridge_retained CFDataRef)[vCardString dataUsingEncoding:NSUTF8StringEncoding];
            ABAddressBookRef book = ABAddressBookCreate();
            ABRecordRef defaultSource = ABAddressBookCopyDefaultSource(book);
            CFArrayRef vCardPeople = ABPersonCreatePeopleInSourceWithVCardRepresentation(defaultSource, vCardData);
            for (CFIndex index = 0; index < CFArrayGetCount(vCardPeople); index++) {
                ABRecordRef person = CFArrayGetValueAtIndex(vCardPeople, index);
                ABAddressBookAddRecord(book, person, NULL);
                CFRelease(person);
            }
            CFRelease(vCardPeople);
            CFRelease(defaultSource);
            ABAddressBookSave(book, NULL);
            CFRelease(book);
        }

    I have searched a bit and fixed the code, and this is what it looks like now. It doesn't crash anymore, but it still doesn't save the vCard (NSString format) in the address book. Any clues?

    Read the article

  • Pagination, Duplicate Content, and SEO

    - by Iamtotallylost
    Please consider a list of items (forum comments, articles, shoes, doesn't matter) which are spread over multiple pages. Different sort orders are supported (by date, by popularity, by price, etc.). So a URL might look like this (I use the query style here to simplify things):

        /items?id=1234&page=42&sort=popularity
        /items?id=1234&page=5&sort=date

    Now, in terms of SEO, I think I should be worried about duplicate content. After all, each item appears at least as many times as there are sort orders. I've seen Matt Cutts talking about the rel=canonical link tag, but he also said that the canonical page should have very similar content. But this is not the case here, because page #1 in a non-canonical sort order might have completely different items than page #1 in the canonical sort order. For a given non-canonical page, there is no clear canonical page listing all the same items, so I think rel=canonical won't help here. Then I thought about using the noindex meta tag on all pages with a non-canonical sort order, and not using it on pages with the canonical sort order. However, if I use that method, what will happen with backlinks that point to non-canonical pages: will they still spread their page rank juice, even though the first page googlebot (or any other crawler) is going to encounter is marked as "noindex"? Can you please comment on my problem and what you think is the best solution? If you think you have a better solution, please consider that 1) I do not want to use Javascript for this, 2) I do not want all the items to be on one page. Thank you.

    Read the article

  • Free Certification Exams for Visual Studio 2010

    - by budugu
    Get promotional codes from here: http://blogs.msdn.com/gerryo/archive/2010/03/17/register-for-visual-studio-2010-beta-exams.aspx

    You don't have to pay anything to take these exams. These are 100% free. If you pass the exam, you earn the certification just the same as if you took it in a non-beta environment. From Gerry O'Brien's blog...

    2) Is this a real exam? - Yes it is. Even though the questions are not scored at the time you take the exam, they are real questions and the exam is real. If you pass the exam, you earn the certification just the same as if you took it in a non-beta environment. This means you don't get a pass/fail or score immediately following the exam, but you do get notified 8 to 10 weeks later because we move slow in getting the final scoring in place.

    4) What is the main difference between a beta and non-beta exam, besides cost? - The beta exam will show you questions that have not been through a final QA check. You are that final QA check. Non-beta exams expose you to 40 or 45 questions and you have a total of two hours to complete it. The beta exam could expose you to as many as 125 to 150 questions and take up to four hours.

    The following exams are for ASP.NET developers:

        Exam 71-515, TS: Web Applications Development with Microsoft .NET Framework 4
        Exam 71-519, Pro: Designing and Developing Web Applications Using Microsoft .NET Framework 4

    Read the article

  • Lucene and .NET Part I

    - by javarg
    I’ve playing around with Lucene.NET and trying to get a feeling of what was required to develop and implement a full business application using it. As you would imagine, many things are required for you to implement a robust solution for indexing content and searching it afterwards. Lucene is a great and robust solution for indexing content. It offers fast and performance enhanced search engine library available in Java and .NET. You will want to use this library in many particular scenarios: In Windows Azure, to support Full Text Search (a functionality not currently supported by SQL Azure) When storing files outside or not managed by your database (like in large document storage solutions that uses File System) When Full Text Search is not really what you need Lucene is more than a Full Text Search solution. It has several analyzers that let you process and search content in different ways (decomposing sentences, deriving words, removing articles, etc.). When deciding to implement indexing using Lucene, you will need to take into account the following: How content is to be indexed by Lucene and when. Using a service that runs after a specific interval Immediately when content changes When content is to available for searching / Availability of indexed content (as in real time content search) Immediately when content changes = near real time searching After a few minutes.. Ease of maintainability and development Some Technical Concerns.. When indexing content, indexes are locked for writing operations by the Index Writer. This means that Lucene is best designed to index content using single writer approach. When searching, Index Readers take a snapshot of indexes. This has the following implications: Setting up an index reader is a costly task. Your are not supposed to create one for each query or search. A good practice is to create readers and reuse them for several searches. The latter means that even when the content gets updated, you wont be able to see the changes. You will need to recycle the reader. In the second part of this post we will review some alternatives and design considerations.

    Read the article

  • Useful Resources on ADF/JSF/JDEVELOPER

    - by vijaykumar.yenne
    In most of my interactions with the partner developer community who are working on either WebCenter projects or ADF-related projects, there are constant questions that come up about the documentation, samples, or step-by-step instructions for novices. Though most of the resources are available online on the OTN site, there seems to be a difficulty in getting hold of the right resource to get the job done, which I am yet to solve. However, here is a list of resources you should have seen if you are in the Oracle world and building rich internet applications using JSF/ADF.

    1. If you have just started with JDeveloper, want to learn the different nuances of ADF development, and want to deepen your knowledge, you should definitely go through these tutorials: http://www.oracle.com/technology/products/jdev/11/cuecards111/index.html

    2. Everything about JDeveloper - includes the IDE download, demos, sample code, best practices, etc.: http://www.oracle.com/technology/products/jdev/index.html

    3. All about ADF: http://www.oracle.com/technology/products/adf/index.html

    4. Know more about ADF Faces: http://www.oracle.com/technology/products/adf/index.html

    5. If you want to deepen your knowledge, here is the aggregate list of all the blogs by our internal development teams and experts from around the globe. This is a really interesting feed, especially when you want to do a deep dive on various aspects and want to be an expert in the Oracle UI world: http://www.connotea.org/user/jdeveloper

    Last but not least, you should always leverage the entire community whenever you run into any issues: http://forums.oracle.com

    Read the article

  • Lost in Translation

    - by antony.reynolds
    Using the Correct Character Set for the SOA Suite Database

    A couple of years ago I spent a wonderful week in Tel Aviv helping with the first Oracle BAM implementation in Israel. Although everyone I interacted with spoke better English than I did, the screens and data for the implementation were all in Hebrew, meaning the Hebrew alphabet. Over the week I learnt to recognize a few Hebrew words, enough to enable me to test what we were doing. So I knew SOA Suite worked OK with non-English and non-Latin character sets, and I was suspicious recently when a customer was having data corruption of non-Latin characters. On investigation it turned out that the data was received correctly by the SOA Suite, but then it was corrupted after being stored in the database.

    A little investigation revealed that the customer was using the default database character set, "WE8ISO8859P1", which, as the name suggests, only supports West European 8-bit characters. What was happening was that when the customer had installed his SOA repository he had ignored the message that his database was not using AL32UTF8 as the character set. After changing the character set on his database he no longer saw the corruption of non-English character data.

    So the moral of this story is: always install the SOA Repository into an AL32UTF8 database. This is true for both SOA Suite 10g and 11g. Ignore it at your peril, because you never know when you will need to support Hebrew, or Japanese, or another multi-byte character set.
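
    To check which character set an existing database uses before installing the repository, a query along these lines works against the NLS data dictionary view (my addition, not from the original post):

        SELECT parameter, value
          FROM nls_database_parameters
         WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');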

    Read the article

  • Site in subdomain (MaraDNS + Nginx)

    - by Grzegorz
    Welcome. Actually, I'm doing some experiments on my VPS with Ubuntu. I've installed MaraDNS with nginx. At this moment I've correctly launched a static site which is available from the Internet (maindomain.com). In the next step I want to add a new site which will be available in a subdomain, for example dev.maindomain.com. I've tried this in the db.maindomain.com file (used by MaraDNS):

        maindomain.com.     xxx.xxx.xxx.xxx
        www.maindomain.com. CNAME maindomain.com.
        dev.maindomain.com. xxx.xxx.xxx.xxx

    where xxx.xxx.xxx.xxx is the VPS IP address. In nginx.conf I have:

        server {
            listen 80;
            server_name maindomain.com;
            access_log /var/log/nginx/maindomain.com.log;
            location / {
                root /var/www/maindomain.com;
                index index.html;
            }
        }
        server {
            listen 80;
            server_name dev.maindomain.com;
            access_log /var/log/nginx/dev.maindomain.com.log;
            location / {
                root /var/www/dev.maindomain.com;
                index index.html;
            }
        }

    With this configuration maindomain.com works properly, but dev.maindomain.com isn't available. When I try:

        ping dev.maindomain.com

    then I get my xxx.xxx.xxx.xxx IP. Do you have any suggestions how I can resolve this problem?

    Read the article

  • Multiline Replacement With Visual Studio

    - by Alois Kraus
    I had to remove some file headers in a bigger project which were all of the form:

        #region File Header
        /*[ Compilation unit ----------------------------------------------------------
              Name            : Class1.cs
              Language        : C#
              Creation Date   :
              Description     :
        -----------------------------------------------------------------------------*/
        /*] END */
        #endregion

    I know it would be a cool thing to write a simple C# program: use a recursive file search, read all lines, skip the first n lines and write the files back to disc. But I wanted to test things first before I ruin my source files with one little typo. That's where the Visual Studio Search and Replace in Files dialog comes into the game. I can test my regular expression to do a multiline match with the Find button before actually breaking anything. And if something goes wrong I have the Undo button.

    There is a nice blog post from Paulo Morgado online who deals with multiline regular expressions. The Visual Studio regular expressions are non-standard, so you have to adapt your usual regex know-how to the other patterns. The pattern I finally came up with is:

        \#region File Header:b*(.*\n)@\#endregion

    The regular expression can be read as:

        \#region File Header   Match "#region File Header". \# escapes the # character since it is a quantifier.
        :b*                    After this, none or more spaces or tabs can follow (:b stands for space or tab).
        (.*\n)@                Match anything across lines in a non-greedy way (the @ character makes it non-greedy) to prevent matching too much, until the #endregion somewhere in our source file.
        \#endregion            Match everything until "#endregion" is found.

    I had always known that Visual Studio can do it, but I never bothered to learn the non-standard regex syntax. This is powerful and it has been inside Visual Studio since 2005!

    Read the article

  • Switch vs Polymorphism when dealing with model and view

    - by Raphael Oliveira
    I can't figure out a better solution to my problem. I have a view controller that presents a list of elements. Those elements are models that can be instances of B, C, D, etc. and inherit from A. So in that view controller, each item should go to a different screen of the application and pass some data when the user selects one of them. The two alternatives that come to my mind are (please ignore the syntax, it is not a specific language):

    1) switch (I know that sucks)

        // inside the view controller
        void onClickItem(int index) {
            A a = items.get(index);
            switch (a.type) {
                case b:
                    B b = (B)a;
                    go to screen X;
                    x.v1 = b.v1; // fill X with b data
                    x.v2 = b.v2;
                case c:
                    go to screen Y;
                etc...
            }
        }

    2) polymorphism

        // inside the view controller
        void onClickItem(int index) {
            A a = items.get(index);
            Screen s = new (a.getDestinationScreen()); // ignore the syntax
            s.v1 = a.v1; // fill s with information about A
            s.v2 = a.v2;
            show(s);
        }

        // inside B
        Class getDestinationScreen(void) {
            return Class(X);
        }

        // inside C
        Class getDestinationScreen(void) {
            return Class(Y);
        }

    My problem with solution 2 is that since B, C, D, etc. are models, they shouldn't know about view-related stuff. Or should they, in that case?

    Read the article

  • Ubuntu 10.04 LTS Server - fresh install - failed apt-get update

    - by user87227
    Good day and greetings to all. I just did a fresh installation of Ubuntu 10.04 LTS Server without any issues. However, apt-get update (or aptitude update) is giving the following errors:

    a. "bzip2: (stdin) is not a bzip2 file. ign" for all lines, plus the following errors:

        Fetched 3,582B in 0s (74.1kB/s)
        Reading package lists...
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://security.ubuntu.com lucid-security Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://in.archive.ubuntu.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://in.archive.ubuntu.com lucid-updates Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/lucid-security/Release
        W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/lucid/Release
        W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/lucid-updates/Release
        W: Some index files failed to download, they have been ignored, or old ones used instead.

    Please guide me in resolving this error. TIA. Regards, Venu
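
    A commonly suggested remedy (not from the original post) is to clear the cached, possibly corrupted package lists and fetch them again; if the NODATA/bzip2 errors persist after that, an intercepting proxy or a broken mirror is the usual suspect:

        # remove the downloaded index files, then refetch them
        sudo rm -f /var/lib/apt/lists/*_*
        sudo rm -f /var/lib/apt/lists/partial/*
        sudo apt-get clean
        sudo apt-get update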

    Read the article

  • How do I change pages registering as 404 to 200

    - by christian
    I have this problem. After relaunching my site, http://www.kgstiles.com, traffic dropped immensely (about 60%). After troubleshooting for a week and a half - losing thousands of dollars of traffic in the process - I found that Google was getting a 404 error at the end of many of my 301 redirects (so it wouldn't index the new pages). Most of the pages, though, would register in my browser. They registered as a 404 error in Google's index as well as in a 404 checker. So my first question is: could this be what's causing my loss of traffic? And second: how do I fix it? I'm desperate! Any help is appreciated!

        # BEGIN s2Member GZIP exclusions
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{QUERY_STRING} (^|\?|&)s2member_file_download\=.+
            RewriteRule .* - [E=no-gzip:1]
        </IfModule>
        # END s2Member GZIP exclusions

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteRule ^moreinfo/(.*)$ http://www.kgstiles.com/moreinfo$1 [R=301]
            RewriteRule ^healthsolutions/(.*)$ http://www.kgstiles.com/healthsolutions$1 [R=301]
            RewriteRule ^(.*)\.html$ $1/ [R=301,L]
            RewriteRule ^(.*)\.htm$ $1/ [R=301,L]
        </IfModule>
        # END WordPress

    Read the article

  • Best Terminology for a particular php-based site architecture? [closed]

    - by hen3ry
    For a site...

        o whose overall look-and-feel is generated by one PHP page ("index.php"),
        o in which "index.php" provides, for all pages served, the following required components: the DOCTYPE, the opening html tag, the head section, the opening body tag, the end-body tag, and the end-html tag,
        o which uses computed hierarchical navigation menus within "index.php" to offer visitors access to the site content, and
        o in which all content is stored in individual files that contain "headerless html" (the DOCTYPE, etc. being provided by "index.php" as described above):

    Q1: What term best describes this architecture? I'm seeking a concise descriptor that is useful in conversation and definitive as a search term, in whole-web searches and when searching here on Pro Webmasters.

    Q2: What term best describes the individual content files? Same general goals for the descriptor as above. As you see above, I couldn't avoid using the term "headerless html", my best choice. But this term does not seem to be in general use. I've found some people use this term to describe content files such as mine, but others use it quite differently.

    Read the article
