Search Results

Search found 45245 results on 1810 pages for 'html content extraction'.


  • nginx probably delivering wrong file type for .css file with PHP tags

    - by Katai
    And again, nginx is giving me many questions today. As always, I already tried around for a while, but can't seem to fix this issue: I just configured nginx to handle my .css files the same as my .php files (to parse PHP tags inside the CSS file). This works perfectly, and the file is found and delivered. I debugged it with Firebug, and everything is OK (it displays the contents of the .css inside the opened <link> tag). So, everything working, right? Wrong. The browser has the CSS, but it does not interpret it! What I mean by this: apparently, the content type of the CSS is wrong. The page can access the CSS, but doesn't bother at all to actually use it.

    What I checked / tried:

    - There are no PHP errors inside the .css, so that one is out.
    - The .css is accessible. I can call the URI manually, or check if the included URL finds it: both work.
    - The .css has no syntax errors (I switched to a CSS file that just has body { background-color: #000; }).
    - It works without nginx.
    - I deleted the browser cache / restarted nginx after config rewrites.

    Here is the configuration:

        server {
            listen 80;
            server_name localhost;
            access_log /var/log/nginx/board.access_log;
            error_log /var/log/nginx/board.error_log warn;
            root /var/www/board/public;
            index index.php;
            fastcgi_index index.php;

            location / {
                try_files $uri $uri /index.php;
            }

            location ~ (\.php|\.css)$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                #keepalive_timeout 0;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:7777;
            }
        }

    Firebug 'Network' response headers:

        Connection: keep-alive
        Content-Encoding: gzip
        Content-Type: text/html
        Date: Sat, 16 Jun 2012 10:08:40 GMT
        Server: nginx/1.0.5
        Transfer-Encoding: chunked
        X-Powered-By: PHP/5.3.6-13ubuntu3.7

    I think I just answered my own question. Is the Content-Type: text/html the problem? How can I change that? My personal guess is that I have to use this in some way:

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

    But I'm not sure... anyone have an idea how to solve this?

    TL;DR: the CSS file is delivered correctly, but the browser doesn't seem to use it as CSS. (Tested; it works on Apache.)
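
    If that guess about the Content-Type is right, the simplest fix is probably at the PHP level: since the stylesheet is generated by PHP, PHP's default text/html content type wins over nginx's mime.types, and it can be overridden from inside the file itself. A minimal sketch:

        <?php header('Content-Type: text/css'); ?>
        body { background-color: #000; }

    Forcing the type on the nginx side for that location would be another option, but sending the header from PHP keeps the config untouched.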


  • Linux build ISO error

    - by Neil
    I played with Linux customization, and when I want to build the ISO back I get this error:

        $ mkisofs -r -o rhel.iso -b isolinux/isolinux.bin -c isolinux/boot.cat ./
        INFO: UTF-8 character encoding detected by locale settings.
        Assuming UTF-8 encoded filenames on source filesystem,
        use -input-charset to override.
        Unknown file type (unallocated) ./.. - ignoring and continuing.
        Using RELEA000.HTM;1 for /RELEASE-NOTES-pt_BR.html (RELEASE-NOTES-U1-pt_BR.html)
        Size of boot image is 20 sectors -> mkisofs: Error - boot image './isolinux/isolinux.bin' has not an allowable size.

    I didn't change isolinux.bin; why do I have this error?
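
    For what it's worth, this error usually isn't about isolinux.bin having changed: without further flags, mkisofs assumes floppy-emulation boot, which requires the boot image to be exactly 1200, 1440, or 2880 KB, and isolinux.bin (20 sectors here) is none of those. ISOLINUX needs no-emulation mode. A sketch of the usual invocation:

        mkisofs -r -o rhel.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
                -no-emul-boot -boot-load-size 4 -boot-info-table ./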


  • Apache not using the right SSL certificate [on hold]

    - by user2420318
    In my Apache 2 setup, I have one VirtualHost for my main site, and another for a static-content site (downloads, CSS, etc.). I have SSL certificates for both, and the static-content site is under a subdomain of the main site. I have configured four virtual hosts in total, as both sites need SSL ones as well. When I only had one SSL site, everything was OK, but now with the second, the first site uses the second site's certificate, even though it is told specifically to use its own in its VirtualHost section. I honestly have no idea why Apache would do this. Any ideas? I have a feeling there may be some default/global setting that is set for some odd reason. I am using different IPs for the virtual hosts.
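
    Since the sites are on different IPs, one thing worth verifying (a sketch; the addresses and paths are placeholders, not from the question) is that each SSL vhost binds its own IP explicitly rather than *:443; otherwise Apache can treat the first *:443 vhost as the default and hand out its certificate for every request:

        <VirtualHost 192.0.2.10:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
        </VirtualHost>

        <VirtualHost 192.0.2.11:443>
            ServerName static.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/static.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/static.example.com.key
        </VirtualHost>

    apachectl -S is handy here: it prints which vhost Apache actually selects for each address and server name.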


  • Google Drive have a folder/file in both private and public

    - by Sander
    I have my documents in Google Drive/Docs neatly organised in a structured folder system. To avoid cluttering email inboxes, I now would like to share the content of one single private folder by putting it in the public directory. If possible I'd like to do this without duplicating the content. Is it possible to organise files in Google Drive so that they have two tags/folders attached to them, one private and one public? Is there another way to quickly share/link a folder to my public folder without actually moving it from my private folder to my public one?


  • Strange email coming from/to my computer

    - by Micah
    I'm running smtp4dev on my machine to trap anything going in/out of my computer on port 25 for testing purposes. Every so often this email gets trapped and I have no idea where it's from. I have Microsoft Security Essentials running on my machine and it hasn't identified any viruses or anything, so I'm not sure what's going on. Here's the content of the message:

        Received: from [125.180.72.4] by 173.162.7.130 SMTP id O2Ncv62Ghig1vR
            for <[email protected]>; Fri, 24 Jun 2011 20:36:15 +0200
        Received: from [125.180.72.4] by 173.162.7.130 SMTP id O2Ncv62Ghig1vR
            for <[email protected]>; Fri, 24 Jun 2011 20:36:15 +0200
        Message-ID: <[email protected]>
        From: "" <[email protected]>
        To: <[email protected]>
        Subject: BC_173.162.7.130
        Date: Fri, 24 Jun 11 20:36:15 GMT
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="----=_NextPart_000_000D_01C2CC60.49F4EC70"


  • List repositories from multiple projects in Trac using mod_python

    - by Steffen Eriksen
    I am currently working on a customized web page that shows the available projects I have in Trac (1.0.1). I am using mod_python to connect to the Trac interface. I found a standard page for this, but it didn't show a listing of repositories: the page exposes some variables to link to the different projects, but I can't find variables for the repositories inside the projects. I have set up the web page from reading this: http://trac.edgewall.org/wiki/TracInterfaceCustomization (under Site Appearance).

    Short summary; editing ../conf.d/trac.conf:

        PythonOption TracEnvParentDir /parent/dir/of/projects
        PythonOption TracEnvIndexTemplate /path/to/template

    And making a template file I can edit at /path/to/template:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:py="http://genshi.edgewall.org/"
              xmlns:xi="http://www.w3.org/2001/XInclude">
          <head>
            <title>Available Projects</title>
          </head>
          <body>
            <h1>Available Projects</h1>
            <ul>
              <dl>
                <li py:for="project in projects" py:choose="">
                  <a py:when="project.href" href="$project.href"
                     title="$project.description">$project.name</a>
                  ## <dd> WANT TO ADD CODE HERE! </dd>
                  <py:otherwise>
                    <small>$project.name: <em>Error</em> <br />
                    ($project.description)</small>
                  </py:otherwise>
                </li>
              </dl>
            </ul>
          </body>
        </html>

    So... the code I want to add is something like:

        <dd py:for="repos in project.repository" py:choose="">
          <a py:when="repos.href" href="$repos.href">$repos.name</a>
        </dd>

    I can't figure out where to add the variables, or if there already exist some variables I can use. After searching through the files it seemed like main.py had something to do with the variables (/usr/local/Trac-1.0.1/trac/web/main.py), but at first look it didn't seem easy to just add more variables. Is there a simple way to find the rest of the variables? And how hard is it to add more variables? Or would it perhaps be easier to do this an alternative way? All I need is to link to the repositories dynamically.
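
    One possible direction (a sketch, untested; the patch location and the repositories key are assumptions, not part of stock Trac): Trac's RepositoryManager can list the repositories of an environment, so the code in trac/web/main.py that builds each project entry for the index template could attach them:

        # Hypothetical addition inside the loop in trac/web/main.py that
        # builds each `project` dict handed to the index template.
        from trac.versioncontrol.api import RepositoryManager

        project['repositories'] = [
            {'name': reponame or '(default)',
             'href': env.href.browser(reponame or None)}
            for reponame, info
            in RepositoryManager(env).get_all_repositories().items()
        ]

    With something like that in place, the Genshi snippet from the question would loop over project.repositories instead of project.repository.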


  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites:

    - http://blog.stackoverflow.com
    - http://www.codinghorror.com

    (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!)

    I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using it:

    - My IP address was quickly banned from Google for using it.
    - I got lots of 500 and 503 errors and "waiting 5 minutes…" messages.
    - Ultimately, I can recover the text content faster by hand.

    I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches.

    Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult.

    Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages?

    (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… unless you have a time machine…)
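
    For the images in particular, the Internet Archive's Wayback Machine tends to keep page resources longer than search-engine caches do, and its availability endpoint can be scripted to find the closest snapshot of each post. A sketch, assuming the post URLs are already listed one per line in posts.txt:

        import json
        import urllib.request

        for url in open("posts.txt"):
            api = "https://archive.org/wayback/available?url=" + url.strip()
            with urllib.request.urlopen(api) as resp:
                data = json.load(resp)
            # The API returns the closest archived snapshot, if one exists.
            snap = data.get("archived_snapshots", {}).get("closest")
            if snap:
                print(snap["url"])  # fetch this snapshot, then pull its <img> resources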


  • Introducing Deep Fried Devcast

    - by Matt Christian
    I've been working on a new podcast for the game development community called the Deep Fried Devcast. Currently we are in pre-production but should have some episodes up in the near future. Here is a quick FAQ about the show:

    What is the Deep Fried Devcast?
    The Deep Fried Devcast is a bi-weekly show all about game development. The show will feature developer interviews, a focus on the technical aspects of game development (programming, technical design), the business of team game development (time management, project management), and other areas focused around the actual development of games.

    Wait, no game design? No game discussions?!
    Calm down, calm down. Although the focus of the podcast is on the technical aspects of game dev, there will be episodes and content focused on all areas of the gaming industry, including discussion on design, story, recent game releases, games we've been playing, etc... Anything could show up in the Deep Fried Devcast and nothing is off limits.

    How can I help?
    We're always looking for new content ideas, emails, and anything you want to send us (within some kind of reason!). You can even be a guest host if you want! Email us at: deepfrieddevcast [AT] gmail [DOT] com

    Where's the podcast?!
    We're still recording it! Don't worry, it will be up soon. Keep an eye on www.deepfrieddevcast.com for the latest updates (that will be up soon too!).


  • Load order in XNA?

    - by marc wellman
    I am wondering whether there is a mechanism to manually control the call order of void Game.LoadContent(), as there is for void Game.Draw(GameTime gt) by setting int DrawableGameComponent.DrawOrder, apart from the order that results from adding components to the Game.Components container. And maybe something similar exists for Game.Update(GameTime gt)?

    UPDATE: To exemplify my issue, consider several game components that depend on each other regarding their instantiation. All are inherited from DrawableGameComponent. Now suppose that in one of these components you load a Model from the game's content pipeline and add it to some static container in order to provide access to it for other game components:

        public override void LoadContent()
        {
            // ...
            Model m = _contentManager.Load<Model>(@"content/myModel");
            // GameComponents is a static class with an accessible list where game components reside.
            GameComponents.AddComponent(m);
            // ...
        }

    Now it's easy to imagine that this component's load method has to precede those of other game components that want to access the model m in their own load methods.
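
    For what it's worth, XNA initializes (and therefore content-loads) components in the order they sit in Game.Components, so the least invasive control is to register them in dependency order. A minimal sketch with hypothetical component names:

        protected override void Initialize()
        {
            // ModelProviderComponent loads the shared Model in its LoadContent;
            // ModelConsumerComponent reads it from the static container afterwards.
            Components.Add(new ModelProviderComponent(this));
            Components.Add(new ModelConsumerComponent(this));

            base.Initialize();  // runs Initialize/LoadContent in the order above
        }

    There is no LoadOrder property analogous to DrawOrder/UpdateOrder, so beyond add-order the remaining option is to drop the static container and resolve dependencies lazily (e.g., via Game.Services) at first use.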


  • What are the differences between the Fujitsu ScanSnap S300 and the S1300?

    - by Techboy
    Please can you tell me what the technical differences are between the Fujitsu ScanSnap S300 and the S1300 scanners? S300 = http://www.fujitsu.com/emea/products/scanners/scansnap/tmpl_scanners_scansnap-S300.html S1300 = http://www.fujitsu.com/emea/products/scanners/scansnap/tmpl_scanners_scansnap-S1300.html The only difference I can see is that the S300 has a CCD image sensor and the S1300 has a CIS image sensor. Other than that, the resolutions, pages per minute and physical sizes of both models are exactly the same. Given that, is there a reason why I would want the S1300 instead of the cheaper S300? I just want a scanner for duplex document scanning, for home use (documents, receipts, etc.).


  • Issue in extending webapplication sharepoint

    - by GHIYA
    I have extended a web application in a farm. The main server is vsmoss1, where I did:

        vsmoss1 -> web application (port 80)
        vm.com  -> extended web app (of the above one), anonymous

    The WFE servers are named vsmoss2 and vsmoss3, and I have load balanced them so that requests go to vsmoss2 and vsmoss3 when someone hits vm.com. When I hit vm.com it works fine without authentication (it also shows the Content Query web part on my page). I know there is no need to do that, but when I hit vsmoss2 and vsmoss3 directly, it shows me an error in my Content Query web part. Any solution for that?

    Finding this strange, I tried the following:

    - I closed both extended web apps on vsmoss2 and vsmoss3. Result: the site is up and running, but this time with authentication.
    - I closed both the extended and main web applications. Result: the site on vsmoss2 and vsmoss3 is down.
    - I closed the main web application only. Result: the site is up and running without authentication.

    Anyone have an idea why this is behaving like this?


  • Client-side code and UPK

    - by [email protected]
    There is a long-running discussion in UPK Development regarding the use of any client-side code as part of the end-user playback. By this I mean anything which requires an install, including ActiveX controls, browser helper objects, stand-alone applications, things that run in the task bar, etc. We all have grown to love zero-footprint applications over the past ten years, but there are some things which are not technically feasible using HTML alone.

    One example of this is the functionality provided by our SmartHelp in-application support component. This allows the user to launch context-sensitive help without making modifications to the target application. (If you are unfamiliar with SmartHelp, more information can be found in the "In-Application Support Guide" in the UPK manual directory.)

    We always try to implement everything we can using only HTML, but there are many features which have been requested over the years that would require some client-side code in order to work. When these come up for discussion, there is always a spirited debate about the acceptability of a client-side solution. I thought it would be interesting to ask for feedback from a wider audience. What do you think about client-side components? Would your organization consider them? Do you already deploy SmartHelp? Is there a large hurdle to clear for these to be worth the deployment costs? Let us know.

    Mark Overton, VP Development for UPK


  • Oracle Enterprise Manager Extensibility News - June 2014

    - by Joe Diemer
    Introducing Extensibility Exchange Version 2

    On the heels of Enterprise Manager 12c Release 4 this week comes version 2.0 of the Extensibility Exchange. A new theme allows optimal viewing on a number of different computing devices, from large monitor displays to tablets to smartphones. One of the first things you'll notice is a scrollable banner with the latest news related to Enterprise Manager and extensibility. Along with the "slider" and the latest entries from Oracle and the partner community, new features like a tag cloud and an auto-complete search box provide a better way to find the plug-in, connector or other Enterprise Manager entity you are looking for. Once you find it, a content details page with specific info related to that particular entity will enable you to access it at the provider's site and also rate and comment on that particular item. You can also send an email from the content details page, which is routed to the developer. And if you want to use version 1 of the Extensibility Exchange instead, you will be able to do so via the "Classic" option. Check it out today at http://www.oracle.com/goto/emextensibility.

    Recent Additions from Oracle's Partner Community

    A number of important third-party plug-ins have been contributed by Oracle's partner community, which can be accessed via the Extensibility Exchange or by clicking the links in this blog:

    - Dell Open Manage
    - Fusion I-O ION Accelerator
    - NetApp SANtricity E-Series
    - PostgreSQL by Blue Medora

    You can also check out the following best practices and labs available via the Exchange:

    - Riverbed Stingray Traffic Manager Reference Architecture
    - Datavail Alert Optimizer Custom Templates
    - Apps Associates' Oracle Enterprise Manager "Test Drives" for Oracle Database 12c Management
    - Oracle Enterprise Manager Monitoring Essentials
    - Oracle Application Management Suite for Oracle E-Business Suite


  • How to solve 404 for static files with Django and Nginx?

    - by Lucio
    I set up a Trusty VM with Django + nginx (other stuff too). The server is working; I get the "Welcome to Django" page. But when I go to servername/admin it loads the HTML page but fails to load the static content. The admin page has these links to static content:

        <link rel="stylesheet" type="text/css" href="/static/admin/css/base.css" />
        <link rel="stylesheet" type="text/css" href="/static/admin/css/login.css" />

    Both of the CSS files give me 404, as the nginx log shows:

        192.168.56.1 - - [05/Jun/2014:12:04:09 -0300] "GET /admin HTTP/1.1" 301 5 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0"
        192.168.56.1 - - [05/Jun/2014:12:04:09 -0300] "GET /admin/ HTTP/1.1" 200 833 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0"
        192.168.56.1 - - [05/Jun/2014:12:04:10 -0300] "GET /static/admin/css/base.css HTTP/1.1" 404 142 "http://ubuntu-server/admin/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0"
        192.168.56.1 - - [05/Jun/2014:12:04:10 -0300] "GET /static/admin/css/login.css HTTP/1.1" 404 142 "http://ubuntu-server/admin/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0"

    I think the error is in my nginx.conf file, but I do not know how to solve it.
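
    The usual cause (the fix below is a sketch; the paths are assumptions): nothing in nginx serves /static/, and Django only serves static files itself with DEBUG = True. Pointing STATIC_ROOT at a directory, collecting the files, and adding a matching location generally resolves it:

        # settings.py
        STATIC_ROOT = '/srv/mysite/static/'

        # shell: gather the admin (and app) static files into STATIC_ROOT
        python manage.py collectstatic

        # nginx server block: serve them directly
        location /static/ {
            alias /srv/mysite/static/;
        }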


  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently set up mass virtual hosting in Apache so that all we need to do is create a directory to create a new vhost. We're then also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently; however, I'm now having trouble configuring it to fail over to an appropriate default/error page when the vhost directory does not exist. The problem is conflated by my desire to handle two error conditions:

    - vhost not found, i.e. there was no directory found matching the host supplied in the HTTP Host header. I'd like this to display an appropriate site-not-found error page.
    - the 404 page-not-found condition within the vhost.

    Additionally I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none exhibit the behaviour I want. Here's what I'm working with right now:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot /var/www/site-not-found
            ServerName sitenotfound.mydomain.org
            ErrorDocument 500 /500.html
            ErrorDocument 404 /500.html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName api.mydomain.org
            DocumentRoot /var/www/vhosts/api.mydomain.org/current
            # other directives, e.g. setting up passenger/rails etc...
        </VirtualHost>

        <VirtualHost *:80>
            # get the server name from the Host: header
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/vhosts/%0/current
            # other directives ... e.g proxy passing to api etc...
            ErrorDocument 404 /404.html
        </VirtualHost>

    My understanding is that the first vhost block is used as the default, so I have it here as my catch-all site. Next I have my API vhost, and then finally my mass-vhost block. So for a domain that doesn't match the first two ServerNames and has no corresponding directory in /var/www/vhosts/, I'd expect it to fall over to the first vhost. However, with this setup all domains resolve to my default site-not-found. Why is this?

    By putting the mass-vhost block first, I can get the mass vhosts to resolve properly, but not my site-not-found vhost... and in this case I can't seem to find a way to distinguish between a page-level 404 in the vhost and the case where the VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 also). Any help out of this bind is much appreciated!
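
    One way to keep both behaviors (a sketch, untested against this exact setup): since an unmatched Host always falls to the first vhost listed, let the mass-vhost block go first so it catches everything, and bail out to the not-found site only when no matching directory exists:

        # inside the VirtualDocumentRoot vhost
        RewriteEngine On
        # if no directory exists for this Host header, it's an unknown vhost
        RewriteCond /var/www/vhosts/%{HTTP_HOST}/current !-d
        RewriteRule ^ http://sitenotfound.mydomain.org/ [R=302,L]

    That preserves the vhost's own ErrorDocument 404 for genuine missing pages, while a missing vhost directory redirects to the dedicated site-not-found host.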


  • How to Implement Complex Form Data?

    - by SoulBeaver
    I'm supposed to implement a relatively complex form that looks as follows, but has at least four more pages requiring the user to fill in all necessary information for the tracks. This data will need to be sent to the server, which is implemented using Dropwizard. I'm looking for best practices on how to upload and send such a complex form with potentially dozens of songs to the server. The simplest available solution I have seen is a plain multipart/form-data request with the following form schema (source):

    Client:

        <html>
          <body>
            <h1>File Upload with Jersey</h1>
            <form action="rest/file/upload" method="post" enctype="multipart/form-data">
              <p>Select a file : <input type="file" name="file" size="45" /></p>
              <input type="submit" value="Upload It" />
            </form>
          </body>
        </html>

    Server:

        @POST
        @Path("/upload")
        @Consumes(MediaType.MULTIPART_FORM_DATA)
        public Response uploadTrack(final FormDataMultiPart multiPart) {
            List<FormDataBodyPart> artists = multiPart.getFields("artist");
            StringBuffer output = new StringBuffer();
            for (FormDataBodyPart artist : artists)
                output.append(artist.getValueAs(String.class));
            List<FormDataBodyPart> tracks = multiPart.getFields("track");
            for (FormDataBodyPart track : tracks)
                writeToFile(track.getValueAs(InputStream.class), "Foo");
            return Response.status(200).entity(output.toString()).build();
        }

    Then I have also read about file uploads via Ajax or FormData (Mozilla HttpRequest), which allows for POSTs in the formats application/x-www-form-urlencoded, multipart/form-data, or text/plain. I don't know which approach, if any, is best. An ideal solution would be to utilize Jackson to convert a JSON string into my data objects, but I don't get the impression that this is possible with binary data.
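
    One pattern that keeps Jackson in play (a sketch; AlbumMetadata is a hypothetical DTO, not from the question): leave the binary tracks as multipart parts, but collapse all the structured fields into a single JSON-valued part and bind that with ObjectMapper:

        @POST
        @Path("/upload")
        @Consumes(MediaType.MULTIPART_FORM_DATA)
        public Response uploadAlbum(
                @FormDataParam("metadata") String metadataJson,         // all form fields as one JSON part
                @FormDataParam("track") List<FormDataBodyPart> tracks)  // binary track uploads
                throws IOException {
            // com.fasterxml.jackson.databind.ObjectMapper binds the JSON to the DTO
            AlbumMetadata meta = new ObjectMapper()
                    .readValue(metadataJson, AlbumMetadata.class);
            for (FormDataBodyPart track : tracks)
                writeToFile(track.getValueAs(InputStream.class), meta.getTitle());
            return Response.ok(meta.getTitle()).build();
        }

    The client then appends the JSON string as one FormData field and each file as another, so binary data and Jackson-typed metadata travel in the same request.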


  • What is the proper SEO handling of pages appearing in popups using IFRAMEs?

    - by Alexis Wilke
    I am working on a CMS which makes use of IFRAMEs to display some forms; for illustration, say a Search form. So the user clicks the Search button and the website reacts by opening a popup window which includes an IFRAME to the actual Search form. This means I have a "bare"¹ page with the search form. That page, obviously, is directly accessible via its own URI.

    In terms of SEO, the forms have no content worthy of being indexed, so I was thinking of marking them as NOINDEX. Is that the correct way to handle such pages?

    From what I read in some other question, Google suggests putting links from IFRAMEs to other pages. However, I definitely do not want a user-visible link to the Home page, or whatever page is linked with the form, in the content of my forms, because that could be misinterpreted by the user. However, if <link> tags would work too, which one should I use? (i.e. "top" would work, right? with the home page in there?)

    ¹ By bare I mean that the normal theme is not shown; it will be a plain white background with just and only the simplest form.
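
    If NOINDEX is the route taken, either of the standard mechanisms works on the bare form pages; a minimal example:

        <meta name="robots" content="noindex" />

    or, where editing the markup is awkward, the equivalent HTTP response header X-Robots-Tag: noindex.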


  • jQuery "Auto Post-back" Select/Drop-Down List

    - by Doug Lampe
    I have one common piece of jQuery code which I use to submit a form any time the selection changes on a drop-down list (HTML select tag).  This is similar to setting AutoPostBack = true in ASP.Net.  I use a single CSS class (autoSubmit) to annotate that I want the drop-down to force the form to submit on change so the HTML looks something like this: <select id="myAutoSubmitDropDown" name="myAutoSubmitDropDown" class="autoSubmit">     <option value="1">Option 1</option>     <option value="2">Option 2</option> </select> Then the following jQuery will look for any element with this CSS class and submit the parent form when the value is changed: function wireUpAutoSubmit() {   $(".autoSubmit").each(function (index) {     $(this).change(function () {       $(this).closest('form').submit();     })   }); } I put this in a separate function since I might need to wire this up explicitly after an ajax call.  Therefore I use the following code to set this method to fire when the DOM is loaded: $(document).ready(function () {   wireUpAutoSubmit(); });
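
    A small variation that removes the need to re-wire after Ajax calls entirely: delegate the handler to the document, so elements that gain the class later are covered automatically (requires jQuery 1.7+):

        // One binding covers current and future .autoSubmit drop-downs.
        $(document).on('change', '.autoSubmit', function () {
          $(this).closest('form').submit();
        });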


  • How to convert eps file to a large jpeg image

    - by Anand
    Hello, I am using Linux. I want to convert an EPS file to a JPEG file. I find that I can use the convert command. However, the resulting image looks very small. I want to enlarge the JPEG with the -resize option, but it does not seem to work: the resulting image is pure black. Does anyone have the same problem? Here are more details:

    1. If I use

        convert -scale 1000x1000 your.eps your.jpg

    the resulting image looks like a low-quality image; the EPS vector image is not scaled properly.

    2. If I use

        convert -geometry 300% your.eps your.jpg

    I get a pure black image.

    Here is my PDF file: 2shared.com/document/RXl2Be-g/askquestions.html and my EPS file: 2shared.com/file/qrmwKegj/askquestions.html

    Thank you very much for your help!
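
    The usual cause (the command below is a sketch; the numbers are placeholders): ImageMagick rasterizes EPS input at 72 dpi by default, so -resize and -geometry merely enlarge an already tiny bitmap. Raising the density before the input file makes Ghostscript render the vectors at the target size:

        convert -density 300 your.eps -resize 1000x1000 your.jpg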


  • Forwarding a subdomain to main domain using Godaddy

    - by Ryan Hayes
    I have a current blog, which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net, and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I ping both the subdomain and domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing.

    My question is: what is the proper way to set up my DNS to make the blog subdomain forward to my main domain (301) using the GoDaddy DNS manager?

    Bonus: What is the background on why I am getting a 403 error the current way?

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an
        ErrorDocument to handle the request.

    UPDATE 12/7/2010: The error on the site has been fixed; you can no longer view it from my site.
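
    On the bonus question, a plausible background (an assumption, since the hosting config isn't visible): a CNAME only maps DNS; the web server still receives Host: blog.ryanhayes.net and has to be configured to answer for that name, otherwise the request lands in a default vhost whose docroot it isn't permitted to read, hence the 403. With Apache, the relevant piece would look like:

        <VirtualHost *:80>
            ServerName ryanhayes.net
            ServerAlias www.ryanhayes.net blog.ryanhayes.net
            DocumentRoot /var/www/ryanhayes.net
        </VirtualHost>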


  • rlwrap for wlst

    - by john.graves(at)oracle.com
    After reading Gilles's post on using rlwrap for SQL*Plus, http://blogs.oracle.com/xpsoluxdb/2011/03/bash-like_features_in_sqlplus_rman_and_other_oracle_command_line_tools.html, it was obvious this would also be good for WLST:

        . $WL_HOME/server/bin/setWLSEnv.sh
        rlwrap -f wlst.words --multi-line java weblogic.WLST

    Here is my wlst.words file: http://blogs.oracle.com/johngraves/code/wlst.words


  • Modify database for the SharePoint 2010 Enterprise Search administration web site

    - by Mark Hall
    Does anyone know how to modify the database settings for the Enterprise Search administration web site? When you configure the service application via Central Administration, SharePoint just decides to use the default database server and gives a name like Enterprise_Search_DB_Identifier. I want to modify this to at least give a name that makes sense, like SharePoint_Search_AdministrationWebContent, and it might be nice to move it to the database server that is hosting the crawl and property databases. I figured out how to move the Central Administration web content database, but this database is not listed as a content database; it is listed as a Microsoft.Office.Server.Search.Administration.SearchAdminDatabase. I have not tested whether the same process would work, but because you are doing a RemoveContentDatabase and NewContentDatabase, I would assume not. Any help would be appreciated.


  • Why won't SSI work in IIS?

    - by Josh Kodroff
    I can't get IIS to respect my SSI directives; it just outputs the #include directive as if it were regular old HTML. Here are the relevant data points:

    - My file with the include directive is called index.html.
    - This is my directive: <!-- #include file = "header.shtml" --> (it doesn't work with virtual either).
    - The file being requested is in the same directory as the file being #included.
    - The SSI module is installed.
    - The SSINC-shtml handler mapping is present and enabled.

    I think it might be some sort of permissions issue (read/write/execute), but I don't know where those settings are in IIS 7.5.
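
    A likely explanation given those data points (an assumption, not a confirmed diagnosis): the SSINC-shtml mapping only covers *.shtml, and the requested file is index.html, so IIS serves it as plain static HTML and never evaluates the directive. Renaming the file to index.shtml is the quick test; alternatively, a handler for .html can be added, e.g. in web.config (a sketch; note it routes every .html file through the SSI module):

        <system.webServer>
          <handlers>
            <add name="SSI-html" path="*.html" verb="GET,POST"
                 modules="ServerSideIncludeModule" resourceType="File" />
          </handlers>
        </system.webServer>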


  • Suggested Resources Visual Studio Plug-In

    Today's post is a quick plug for a new tool developed by my friend Olaf Conijn, who (amongst other things) has been a developer on several versions of Enterprise Library. His new tool is called Suggested Resources for .NET Developers, and the current 0.8 release works with both Visual Studio 2008 and Visual Studio 2010. So what does it do? Well, here's what Olaf has to say:

    This Visual Studio Integration Package is a proof of concept in:

    - Aggregation of online content within the Visual Studio IDE.
    - Analysis of development activities within the Visual Studio IDE.

    This combination of features allows Suggested Resources for .NET Developers to pro-actively suggest online content that applies to the task being performed by a developer... a bit like having a programming pair that searches for online resources while you focus on getting the job done.

    For more info, screenshots and downloads, head to the Codeplex project site or the Visual Studio Gallery page.

