Search Results

Search found 50062 results on 2003 pages for 'http get'.


  • htaccess correct, Apache logs still showing the evil visitors with 200 code

    - by bulgin
    I hope someone can help me. Please take a look at the following snippet of Apache logs:

        95-169-172-157.evilvisitor.com - - [12/Nov/2012:09:46:02 -0500] "GET /the-page-I-dont-want-to-deliver.html HTTP/1.1" 200 9171 "http://hackers.ru/" "Mozilla/4.0 (MSIE 6.0; Windows NT 5.1; Search)"

    I have the following included in my .htaccess for the root directory of the website, and there are no other .htaccess files anywhere that would affect this:

        RewriteEngine On
        Options +FollowSymLinks
        ServerSignature Off
        ErrorDocument 403 "Nothing Interesting Here"
        order allow,deny
        deny from evilvisitor.com
        deny from hackers.ru
        deny from anonymouse.org
        allow from all

    I also have GeoIP functioning properly and have this included there:

        # for stuff from different countries
        RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^(UA|TR|RU|RO|LV|CZ|IR|HR|KR|TW|NO|NL|NO|IL|SE)
        RewriteRule ^(.*)$ [R=F,L

    I know this works because whenever I attempt to access the website from a proxy in, say, Spain, I get the error message. I also know it works because when accessing the website from anonymouse.org, the proper error code page is displayed. So then why am I still getting these visitors who successfully access the page I don't want them to see with an Apache 200 code, when it should be an error code?
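
    Worth noting, as a hedged diagnosis rather than a confirmed answer: Allow/Deny directives match the connecting client's IP or reverse-resolved hostname, never the Referer header, so "deny from hackers.ru" does not block requests that merely arrive with http://hackers.ru/ as the referrer, which is exactly what the log line shows. A minimal sketch of a Referer-based block plus an IP-based deny (the patterns are illustrative, not taken from the original post):

        # block requests whose Referer mentions hackers.ru (illustrative pattern)
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} hackers\.ru [NC]
        RewriteRule .* - [F,L]

        # denying by IP avoids the double-reverse-DNS lookup that
        # hostname-based deny depends on (and which often fails)
        order allow,deny
        deny from 95.169.172.157
        allow from all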

    Read the article

  • DDDNorth2 Bradford, 13th October 2012 - Async Patterns presentation and source code

    - by Liam Westley
    Many thanks to Andy Westgarth and his team for organising a fantastic conference at the rather elegant Bradford University School of Management. Also, a big congratulations to all the delegates who gave up their free time to come and hear us speak and who were, in general, enthusiastic and asked some cracking questions to keep us speakers on our toes. For those who attended my Async Patterns session, my source code and presentation are now available on GitHub: https://github.com/westleyl/DDDNorth2-AsyncPatterns If you are new to Git then the easiest client to install is GitHub for Windows, a graphical UI for accessing GitHub. Personally, I also have TortoiseGit installed – the file explorer add-in that works in a familiar manner to TortoiseSVN. As I mentioned during the presentation, I have not included the sample data (the music files) in the source code placed on GitHub, but I have included instructions on how to download them from http://silents.bandcamp.com and place them in the correct folders. What I forgot to mention is that Windows Media Player by default does not play Ogg Vorbis and Flac music files; however, you can download the codec installer for these, for free, from http://xiph.org/dshow. I am planning to break down this little project into a series of blog posts, with each pattern being a single blog post, over several weeks. In these I will flesh out the background behind the pattern, the basic goal being achieved, and how to monitor the progress of the sample data being processed. Basically, what I said during the presentation that is missing from the slides.
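
    If you prefer the command line to a graphical client, cloning the repository is a one-liner (assuming Git is already installed):

        git clone https://github.com/westleyl/DDDNorth2-AsyncPatterns.git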

    Read the article

  • DevWeek & SQL Social @ London

    - by Davide Mauri
    Yesterday I had my “SQL Server best practices for developers” session at DevWeek and I really enjoyed it a lot. For all those who asked, I'll put slides and demos online as soon as possible. I'm just waiting to find out where I can put them (on my website or somewhere else), so it should be just a matter of a few days. If you attended my session and would like to rate it, please use SpeakerRate here: http://speakerrate.com/talks/2857-sql-server-best-practices-for-developers I also have to thank Simon Sabin for the very nice event he organized for SQLSocial: http://sqlblogcasts.com/blogs/simons/archive/2010/02/16/SQLSocial-presents-Itzik-Ben-gan--Greg-Low-and-Davide-Mauri.aspx A lot of people attended and we really had interesting discussions. It was my first time doing a session at a pub, and I must say it's *really* fun and enjoyable, especially when you have free beer :-) Now back to Italy and the “usual” work!

    Read the article

  • WebLogic Partner Community Newsletter August 2012

    - by JuergenKress
    Dear WebLogic partner community member, Thanks to all attendees and trainers for their participation in our Fusion Middleware Summer Camps held in Lisbon and Munich. I would also like to thank you for your great feedback and the nice reports shared with us by the AMIS Technology Blog & Middleware by Link Consulting. Most of our courses were overbooked, so if you did not get a chance to attend, we offer a wide range of online training and the course material. A key take-away from the advanced BPM course is to become an expert in ADF; Grant Ronald's Learn Advanced ADF course is available online here. In addition to this, we continue our WebLogic 12c bootcamps at various locations across Europe. Please click here for more details. The latest set of customer meeting presentations is available on our community workspace. Please feel free to access them. We have also updated the ExaLogic kit with additional whitepapers and training material. Please feel free to contact us if you are working on an ExaLogic 2.01 implementation! Tuxedo 12c, the next product of our Fusion Middleware 12c product family, is now available. Enjoy your summer! Jürgen Kress Oracle WebLogic Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/WebLogicnewsAugust2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Tracking Outgoing Links With Google Analytics Events

    - by the_archer
    I've been trying to track clicks on external links on my website using the events tracking method. So I've got my Google Analytics code set up before body ends, as shown below (note: quotes have been entitied by Blogger, but it works fine; they are decoded here):

        <script type='text/javascript'>
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXXXX-X']);
          _gaq.push(['_trackPageview']);
          (function() {
            var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
          })();
        </script>

    Now I wanted to track a link on the addthis.com follow widget. So there is a link of the type below, to which, following instructions from here, I added the onclick event:

        <a addthis:url='http://feeds.feedburner.com/myfeedburnerlurl' onClick="_gaq.push(['_trackEvent', 'Subscription Clicks', 'RSS']);" class='addthis_button_rss_follow'/>

    I clicked on it a couple of times and left it for over a day now, but nothing shows up in Google Analytics events. It just says zero events. Here's a screenshot of the events page on GA: Could anybody help me? Am I doing anything wrong?
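
    One common cause of exactly this symptom, offered as a hedged suggestion rather than a confirmed diagnosis: if the click immediately navigates away, the browser can unload the page before ga.js has sent the event beacon, so the event never reaches Google Analytics. The usual workaround delays navigation briefly (the href below is a placeholder):

        <a href="http://feeds.feedburner.com/example"
           onclick="_gaq.push(['_trackEvent', 'Subscription Clicks', 'RSS']);
                    var href = this.href;
                    // give ga.js ~100ms to fire the beacon before leaving the page
                    setTimeout(function() { document.location = href; }, 100);
                    return false;">Subscribe</a>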

    Read the article

  • crippling repeating "pciehp card not present" notifications

    - by Nanne
    When using Ubuntu (12.04, both installed and on a live USB) I get a lot of these messages:

        pciehp 0000:00:1c.5:pcie04: Card not present on Slot(37)
        pciehp 0000:00:1c.5:pcie04: Card present on Slot(37)

    And by "a lot" I mean about 20 per second. This has a crippling effect, and I would like to get rid of it :) The computer is a Packard Bell EasyNote BG48-U-100 DC. A tip I picked up from a Fedora/Red Hat report of this error was to look at lspci -vnn. I have pasted the part about "00:1c.5" here: http://pastebin.com/0sfsiqW2 For what good it may do, here is the lsmod of my machine: http://pastebin.com/DQZy1kAL From that first pastebin I conclude that it has to do with the module shpchp, which seems to me (aka: Google) to have something to do with ACPI. That's as far as I've come in dissecting this. Can anyone help me along further? What can I do, check, etc.? I did see this topic, but my intention is not to suppress the error message: I know how to do that (from that topic ;) ), but I'm looking for a real solution. From what I've found on the internet, I believe it is neither an Ubuntu-specific nor a Packard Bell-specific problem. If you google the problem it seems to be present on several other distribution/hardware combos as well, and the advice seems to be to remove one of the drivers. I have no clue as to which driver I should look at and what the effect of just removing it would be. I have seen this topic, which is old-ish but describes my problem and is about a similar computer. The solution in that topic was to compile a new kernel using a Spanish guide, which seems a bit extreme to me, so I'm kinda hoping for a better solution than that.
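
    If it does come down to removing a driver, one reversible experiment (a sketch, not a confirmed fix for this machine, and note that if pciehp is built into the kernel rather than loaded as a module, blacklisting will not affect it) is to blacklist the PCIe hot-plug modules and rebuild the initramfs. It is harmless if you never hot-plug PCIe cards, and easy to undo by deleting the file:

        # keep the hot-plug drivers from loading at boot (hypothetical fix)
        echo "blacklist shpchp" | sudo tee /etc/modprobe.d/blacklist-hotplug.conf
        echo "blacklist pciehp" | sudo tee -a /etc/modprobe.d/blacklist-hotplug.conf
        sudo update-initramfs -u
        # after a reboot, verify with: lsmod | grep -E 'pciehp|shpchp'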

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content -- not even a <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of my page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?
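
    Whatever serves the snapshot has to answer before the redirect is issued -- a response can carry only one status line, so sending a 200 "header" and then redirecting anyway cannot work. A minimal PHP sketch of the _escaped_fragment_ branch (render_snapshot() and generate_random_id() are hypothetical helpers standing in for this app's own code):

        <?php
        // A crawler honouring <meta name="fragment" content="!"> requests
        // /?_escaped_fragment_= -- answer it with real content and a 200.
        if (isset($_GET['_escaped_fragment_'])) {
            http_response_code(200);
            echo render_snapshot();   // hypothetical: returns the static HTML snapshot
            exit;
        }
        // Everyone else still gets the app's random-URL redirect.
        header('Location: /' . generate_random_id(), true, 302);
        exit;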

    Read the article

  • DataContractSerializer truncated string when used with MemoryStream, but works with StringWriter

    - by Michael Freidgeim
    We've used the following DataContractSerializeToXml method for a long time, but recently noticed that it doesn't return the full XML for a large object: it truncates it and returns an XML string whose length is a multiple of 1024, with the remainder missing.

        internal static string DataContractSerializeToXml<T>(T obj)
        {
            string strXml = "";
            Type type = obj.GetType(); // typeof(T)
            DataContractSerializer serializer = new DataContractSerializer(type);
            System.IO.MemoryStream aMemStr = new System.IO.MemoryStream();
            System.Xml.XmlTextWriter writer = new System.Xml.XmlTextWriter(aMemStr, null);
            serializer.WriteObject(writer, obj);
            strXml = System.Text.Encoding.UTF8.GetString(aMemStr.ToArray());
            return strXml;
        }

    I tried to debug and searched Google for similar problems, but didn't find an explanation of the error. The closest match, http://forums.codeguru.com/showthread.php?309479-MemoryStream-allocates-size-multiple-of-1024-(, talks about an incorrect length, but not about a truncated string. Fortunately, replacing the MemoryStream with a StringWriter, per http://billrob.com/archive/2010/02/09/datacontractserializer-converting-objects-to-xml-string.aspx, fixed the issue:

        var serializer = new DataContractSerializer(tempData.GetType());
        using (var backing = new System.IO.StringWriter())
        using (var writer = new System.Xml.XmlTextWriter(backing))
        {
            serializer.WriteObject(writer, tempData);
            data.XmlData = backing.ToString();
        }
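
    The multiple-of-1024 length is a strong hint of an unflushed buffer: XmlTextWriter holds output in an internal buffer, and ToArray() is called on the MemoryStream before that buffer has been written out. A hedged two-line change to the original method that would likely also have fixed it:

        serializer.WriteObject(writer, obj);
        writer.Flush(); // push the writer's buffered bytes into the MemoryStream
        strXml = System.Text.Encoding.UTF8.GetString(aMemStr.ToArray());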

    Read the article

  • Cannot Create Bootable USB Drive from .iso file

    - by tarabyte
    I've tried formatting the flash drive as FAT as well as Mac OS Extended (Journaled) through Disk Utility, but still cannot successfully create a bootable drive. I'm following the directions here exactly: http://www.ubuntu.com/download/help/create-a-usb-stick-on-mac-osx Environment: a MacBook Pro, trying to create a bootable flash drive for a MacBook Pro, with an 8GB flash drive. I tested the ubuntu-12.04.1 as well as the Ubuntu 12.10 64-bit .iso downloads. There is nothing to repair in Disk Utility for this hard drive. Every time I finish step 8 of the tutorial I get "file system not recognized" with the options to "initialize" (meaning to reformat my drive), "ignore" or "eject". When I try to re-inspect the flash drive in Disk Utility after plugging it back in, I see that it has some error when I try to verify it, but the "repair" button is disabled. I just want to boot to Ubuntu when my Mac first starts up. Oh the pain. http://lifehacker.com/5934942/how-to-dual-boot-linux-on-your-mac-and-take-back-your-powerhouse-apple-hardware "linux is free insomuch as your time is worthless" - old wise man
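
    For reference, the linked tutorial's steps boil down to converting the ISO and raw-copying it to the stick -- shown here as a sketch: replace N in /dev/diskN with the flash drive's number from diskutil list, because writing to the wrong disk will destroy its contents:

        hdiutil convert -format UDRW -o ubuntu.img ubuntu-12.04.1-desktop-amd64.iso
        diskutil list                         # identify the flash drive, e.g. /dev/disk2
        diskutil unmountDisk /dev/diskN
        sudo dd if=ubuntu.img.dmg of=/dev/rdiskN bs=1m   # OS X appends .dmg to the image name
        diskutil eject /dev/diskN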

    Read the article

  • Tumblr custom domain not redirecting properly

    - by Manic
    I decided to host my blog at Tumblr, using their custom domain setup (http://blog.smokingfishgames.com/ instead of http://smokingfishgames.tumblr.com). However, it's been 72 hours and I'm still getting spotty redirection. It works some of the time--I go and see the page and blog, and it's all fine. However, it occasionally just stops working and redirects back to my web host, which is a directory with nothing but a single file called BUGGER.html (which I stuck in to make sure that it was my web host and not some Tumblr empty directory). Clearing the Chrome DNS cache makes the problem go away--for a while. After a few minutes, or an hour, or however long, I'll start seeing BUGGER.html again. I clear the cache, and poof, the blog shows up. The thing that's curious to me is that when I clear the cache and get BUGGER.html again (which happens occasionally), I can look at my Chrome DNS cache and see:

        assets.tumblr.com            UNSPECIFIED
        blog.smokingfishgames.com    UNSPECIFIED
        www.tumblr.com               UNSPECIFIED

    (IP addresses and expiration times omitted for brevity's sake--if they're important I'm sure I can replicate the issue.) This implies, to me anyway, that my browser is reaching Tumblr but getting bounced back to my web host. Any reason why this would be happening, or is this a normal symptom of DNS propagation? If it is a problem, should I be bothering Tumblr or my host with it, or is this something I can fix myself?
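
    One way to separate propagation problems from a wrong record is to query several resolvers directly and compare the answers (a generic DNS check, not Tumblr-specific advice; the last nameserver is a placeholder for the host's authoritative server):

        dig +short blog.smokingfishgames.com                  # system resolver
        dig +short blog.smokingfishgames.com @8.8.8.8         # Google public DNS
        dig +short blog.smokingfishgames.com @ns1.example.com # your DNS host's nameserver

    If the answers differ, a stale cached record is still floating around and waiting it out is reasonable; if they all agree and point at the web host, the record itself needs fixing.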

    Read the article

  • PHP W3 Validator API, Is this good? [closed]

    - by Josh Purcell
    I was trying to find a way to see if my site's code was valid or not, but I was continuously going over to the W3 Validator, so I decided to make an "API" (however, it really isn't one!). I just wanted to know if anybody can find a better solution than the one I have made. This is what I currently use, with the usage of ?uri=http://www.mydomain.com :

        <?php
        if (!$_GET['uri']) {
            echo "No URI!";
        } else {
            $CheckURI = "http://validator.w3.org/check?uri=" . $_GET['uri'];
            $URL = file_get_contents($CheckURI);
            $Start = strpos($URL, "<title>") + 7;
            $End = strpos($URL, "</title>");
            $Title = substr($URL, $Start, $End - $Start);
            if (preg_match('[Invalid]', $Title)) {
                // Code is INVALID
                echo "<a href='$CheckURI' title='This is not good!' target='_BLANK'>INVALID Source</a>";
            } elseif (preg_match('[Valid]', $Title)) {
                // Code is VALID
                echo "<a href='$CheckURI' title='Check It Yourself!' target='_BLANK'>Valid Source</a>";
            } else {
                // It went WRONG
                echo "";
            }
        }
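
    As an aside, the validator exposes its verdict in an HTTP response header, which is cheaper and less brittle than scraping the <title> -- a hedged sketch (the header name is the one the W3C service has historically sent; verify it before relying on it):

        <?php
        $uri = 'http://www.mydomain.com';
        // fetch the validation result's response headers as an associative array
        $headers = get_headers('http://validator.w3.org/check?uri=' . urlencode($uri), 1);
        $status = isset($headers['X-W3C-Validator-Status'])
            ? $headers['X-W3C-Validator-Status']   // "Valid", "Invalid" or "Abort"
            : 'Unknown';
        echo $status;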

    Read the article

  • Learning OpenGL GLSL - VAO buffer problems?

    - by Bleary
    I've just started digging through OpenGL and GLSL, and now stumbled on something I can't get my head around!? I've stepped back to loading a simple cube and using a simple shader on it, but the result is triangles drawn incorrectly and/or missing. The code I had working perfectly on meshes, but I was attempting to move to using VAOs, so none of the code for storing the vertices and indices has changed. http://i.stack.imgur.com/RxxZ5.jpg http://i.stack.imgur.com/zSU50.jpg What I have for creating the VAO and buffers is this:

        //Create the Vertex array object
        glGenVertexArrays(1, &vaoID);
        // Finally create our vertex buffer objects
        glGenBuffers(VBO_COUNT, mVBONames);
        glBindVertexArray(vaoID);
        // Save vertex attributes into GPU
        glBindBuffer(GL_ARRAY_BUFFER, mVBONames[VERTEX_VBO]);
        // Copy data into the buffer object
        glBufferData(GL_ARRAY_BUFFER, lPolygonVertexCount*VERTEX_STRIDE*sizeof(GLfloat), lVertices, GL_STATIC_DRAW);
        glEnableVertexAttribArray(pos);
        glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, VERTEX_STRIDE*sizeof(GLfloat), 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVBONames[INDEX_VBO]);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, lPolygonCount*sizeof(unsigned int), lIndices, GL_STATIC_DRAW);
        glBindVertexArray(0);

    And the code for drawing the mesh:

        glBindVertexArray(vaoID);
        glUseProgram(shader->programID);
        GLsizei lOffset = mSubMeshes[pMaterialIndex]->IndexOffset*sizeof(unsigned int);
        const GLsizei lElementCount = mSubMeshes[pMaterialIndex]->TriangleCount*TRIAGNLE_VERTEX_COUNT;
        glDrawElements(GL_TRIANGLES, lElementCount, GL_UNSIGNED_SHORT, reinterpret_cast<const GLvoid*>(lOffset));
        // All the points are indeed in the correct place!?
        //glPointSize(10.0f);
        //glDrawElements(GL_POINTS, lElementCount, GL_UNSIGNED_SHORT, 0);
        glUseProgram(0);
        glBindVertexArray(0);

    Eyes have become bleary looking at this today, so any thoughts or a fresh set of eyes would be greatly appreciated.
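
    One mismatch stands out in the snippets above (an observation, not a guaranteed diagnosis): the index buffer is sized and offset with sizeof(unsigned int), yet glDrawElements is told the indices are GL_UNSIGNED_SHORT. The type argument must match how the indices are actually stored, so with unsigned int indices the draw call would be:

        // indices were uploaded as unsigned int, so describe them as such
        glDrawElements(GL_TRIANGLES, lElementCount, GL_UNSIGNED_INT,
                       reinterpret_cast<const GLvoid*>(lOffset));

    Reading 32-bit indices as 16-bit values would plausibly produce exactly the "some triangles wrong, some missing" picture in the screenshots.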

    Read the article

  • What went wrong with my curl install?

    - by Danjah
    I'm fresher than the prince to Linux; I've been following the instructions here: http://chrisfulstow.com/running-node-js-on-windows-with-virtualbox-and-ubuntu (the link tells what I am generally trying to do). I'm all up and running in VBox, and am at the curl install part (I may have done the curl part a week ago, I forget). So I ran this command anyway:

        danjah@danjah-VirtualBox:~$ sudo apt-get install curl

    Result:

        [sudo] password for danjah:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        curl is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Then:

        $ curl http://npmjs.org/install.sh | sudo npm_install=rc sh

    Result:

        fetching: {
        gzip: stdin: unexpected end of file
        /bin/tar: Child returned status 1
        /bin/tar: Error is not recoverable: exiting now
        It failed

    Should I be concerned? How can I test curl? How can I avoid these situations? Perhaps there's a generic way of checking to see if I've already installed packages/etc? Case-specific answers and general advice most appreciated. cheers, d
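
    The gzip "unexpected end of file" is consistent with curl having piped something other than the install script into sh -- for example the body of an HTTP redirect, since plain curl does not follow redirects. Two hedged checks:

        # look at what the URL actually returns (status line and headers only)
        curl -I http://npmjs.org/install.sh

        # if it reports a 301/302, retry while following redirects (-L)
        curl -L http://npmjs.org/install.sh | sudo npm_install=rc sh

    As for testing curl itself: any fetch that prints content (e.g. curl http://www.google.com) confirms the install is fine -- the apt-get output above already showed "curl is already the newest version".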

    Read the article

  • How (and when) to move users to mysqli and PDO_MYSQL?

    - by cj
    An important discussion on the PHP "internals" development mailing list is taking place. It's one that you should take some note of. It concerns the next step in transitioning PHP applications away from the very old mysql extension and towards adopting the much better mysqli extension or the PDO_MYSQL driver for PDO. This would allow the mysql extension to, at some as-yet undetermined time in the future, be removed. Both mysqli and PDO_MYSQL have been around for many years, and have various advantages: http://php.net/manual/en/mysqlinfo.api.choosing.php The initial RFC for this next step is at https://wiki.php.net/rfc/mysql_deprecation I would expect the RFC to change substantially based on current discussion. The crux of that discussion is the timing of the next step of deprecation. There is also discussion of the carrot approach (showing users the benefits of moving) and the stick approach (displaying warnings when the mysql extension is used). As always, there is a lot of guesswork going on as to what MySQL APIs are in current use by PHP applications, how those applications are deployed, and what their upgrade cycle is. This is where you can add your weight to the discussion - and also help by spreading the word to move to mysqli or PDO_MYSQL. An example of such a 'carrot' is the excellent summary at Ulf Wendel's blog: http://blog.ulf-wendel.de/2012/php-mysql-why-to-upgrade-extmysql/ I want to repeat that no time frame for the eventual removal of the mysql extension is set. I expect it to be some years away.
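
    As a concrete taste of the carrot, this is the shape of the change involved -- an old-style query alongside a PDO_MYSQL prepared statement (a sketch; the connection details and table are placeholders):

        <?php
        // old mysql extension style (the API being deprecated):
        //   $result = mysql_query("SELECT name FROM users WHERE id = " . $id);

        // PDO_MYSQL with a prepared statement:
        $pdo  = new PDO('mysql:host=localhost;dbname=test', 'user', 'secret');
        $stmt = $pdo->prepare('SELECT name FROM users WHERE id = ?');
        $stmt->execute(array($id));
        $name = $stmt->fetchColumn();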

    Read the article

  • System wide Proxy settings when on a windows network with a password

    - by sav
    I'm using Ubuntu on a Windows network. I want to connect to the world wide web. I have followed the steps here, which I have found very useful. However, when I try to ping a website (e.g. ping www.wikipedia.org) I get no reply. I can ping local computers on my network, but I need to go through our proxy to get to the world wide web. I can even browse Wikipedia using Firefox; I just needed to enter the proxy configuration script location and my username and password. I'm quite sure the reason I'm having this trouble is that I haven't entered a username and password system-wide, and I'm not sure how to do that. Ultimately I would like to be able to use package managers like Synaptic, but first I need them to be able to connect to the internet. EDIT: As suggested, I created an /etc/apt/apt.conf file like:

        Acquire::http::Proxy "http://chrisav:[email protected]:8080";
        Acquire::https::Proxy "https://chrisav:[email protected]:8080";
        Acquire::ftp::Proxy "ftp://chrisav:[email protected]:8080";
        Acquire::socks::Proxy "socks://chrisav:[email protected]:8080";

    However, I still can't ping Wikipedia, and when I try installing stuff I get:

        chris@chris-Ubuntu:~$ sudo apt-get install kate
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package kate
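
    Two general points, offered as background rather than a diagnosis of this exact setup: ping uses ICMP, which never traverses an HTTP proxy, so a failing ping says nothing about web access through the proxy; and "Unable to locate package" usually just means apt's package lists have not been refreshed since the proxy settings started working. A sketch of the usual system-wide settings (reusing the credentials string from the apt.conf above) plus the refresh:

        # /etc/environment -- read at login, honoured by most command-line tools
        http_proxy="http://chrisav:[email protected]:8080"
        https_proxy="http://chrisav:[email protected]:8080"

        # then refresh the package lists before installing
        sudo apt-get update
        sudo apt-get install kate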

    Read the article

  • Define a Policykit action for Upstart

    - by gentakojima
    I'm trying to get some user to start, stop and reload services by using the udev Upstart interface. I'm using Ubuntu 10.04. I created the file /usr/share/polkit-1/actions/com.ubuntu.Upstart0_6.Job.policy:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE policyconfig PUBLIC
         "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
         "http://www.freedesktop.org/standards/PolicyKit/1.0/policyconfig.dtd">
        <policyconfig>
          <vendor>UXITIC</vendor>
          <vendor_url>http://citius.usc.es/</vendor_url>
          <icon_name>gdm</icon_name>
          <action id="com.ubuntu.Upstart0_6.Job.Stop">
            <_description>Stops a service</_description>
            <_message>Privileges are required to stop a service.</_message>
            <defaults>
              <allow_inactive>no</allow_inactive>
              <allow_active>auth_admin_keep</allow_active>
            </defaults>
          </action>
        </policyconfig>

    And also the corresponding authority file /etc/polkit-1/localauthority/50-local.d/10-allow-users-to-start-and-stop-services.pkla:

        [Stop services]
        Identity=unix-user:jorge
        Action=com.ubuntu.Upstart_0.6.Job.Stop
        ResultAny=no
        ResultInactive=no
        ResultActive=auth_self

    But when I try to stop a service, I get Access Denied :(

        jorge$ dbus-send --print-reply --system --dest=com.ubuntu.Upstart /com/ubuntu/Upstart/jobs/ssh com.ubuntu.Upstart0_6.Job.Stop array:string:'' boolean:true
        Error org.freedesktop.DBus.Error.AccessDenied: Rejected send message, 1 matched rules; type="method_call", sender=":1.452" (uid=1021 pid=28234 comm="dbus-send) interface="com.ubuntu.Upstart0_6.Job" member="Stop" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))
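
    Before anything else, note a discrepancy between the two quoted files (an observation from the text, not a tested fix): the .policy file declares the action id com.ubuntu.Upstart0_6.Job.Stop, while the .pkla rule references com.ubuntu.Upstart_0.6.Job.Stop -- the underscore and dot are placed differently, so the rule can never match. The .pkla entry presumably should read:

        [Stop services]
        Identity=unix-user:jorge
        Action=com.ubuntu.Upstart0_6.Job.Stop
        ResultAny=no
        ResultInactive=no
        ResultActive=auth_self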

    Read the article

  • Implementing the NetBeans Project API on Maven in IntelliJ IDEA

    - by Geertjan
    James McGivern, one of the speakers I met at JAX London, is creating media software on the NetBeans Platform. However, he's using Maven and IntelliJ IDEA, and one of the features he needs is project support, i.e., the project infrastructure that's part of NetBeans IDE. The two documents that describe the NetBeans Project API are these: http://platform.netbeans.org/tutorials/nbm-projecttype.html http://netbeans.dzone.com/how-create-maven-nb-project-type By combining the above two, you'll understand how to create a project infrastructure on top of the NetBeans Platform with Maven. However, an additional step of complexity is added when IntelliJ IDEA is included in the mix, and therefore I created the following screencast which, in 15 minutes, puts all the pieces together. Be aware that I'm probably not using IntelliJ IDEA and Maven as optimally as I could, and I'm publishing this at least partly so that the errors of my ways can be pointed out to me. But, first and foremost, this is especially for you, James:  Note: Intentionally no sound, only callouts explaining what I'm doing. You'll probably need to pause the movie here and there to absorb the text; for details on the text, see the two links referred to above.

    Read the article

  • Any advantage to the script version of Google Adwords' conversion tracking code?

    - by ripper234
    Google AdWords has an HTML snippet to track conversions:

        <script type="text/javascript">
        /* <![CDATA[ */
        var google_conversion_id = 12345;
        var google_conversion_language = "en";
        var google_conversion_format = "3";
        var google_conversion_color = "ffffff";
        var google_conversion_label = "someopaqueid";
        var google_conversion_value = 0;
        /* ]]> */
        </script>
        <script type="text/javascript" src="http://www.googleadservices.com/pagead/conversion.js">
        </script>
        <noscript>
        <div style="display:inline;">
        <img height="1" width="1" style="border-style:none;" alt="" src="http://www.googleadservices.com/pagead/conversion/12345/?label=opaque&amp;guid=ON&amp;script=0"/>
        </div>
        </noscript>

    It is composed of two parts: for clients supporting JavaScript, an inline script that sets variables, plus loading a reporting script; for other clients, an image tag. As far as I can see, the image tag has some advantages: It works on all browsers. It is asynchronous. It's shorter to have only this version, compared to both this and the JS version. Any reason not to drop the <noscript> tag and just use the image conversion snippet directly?
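
    One advantage of the script version worth weighing before dropping it (stated as a general property of client-side tags, not from AdWords documentation quoted here): because the values are plain JavaScript variables, they can be computed per page view, most usefully a dynamic conversion value:

        <script type="text/javascript">
          // hypothetical: report the actual order total instead of the fixed 0
          var google_conversion_value = orderTotal; // set earlier by the page
        </script>

    The image tag can only ever report whatever values were baked into its URL when the page was rendered.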

    Read the article

  • Oracle Magazine Sept/Oct 2012 - Security on the Move

    - by Darin Pendergraft
    This month's Oracle Magazine cover story is Security on the Move. In it, two Oracle IDM customers discuss their impressions of the latest IDM release. Kurt Lieber from Kaiser Permanente and Peter Boyle from BT discuss how they are using Oracle IDM to enable their business. Click this link to see the latest issue: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/index.html In addition to the cover article, the Analyst’s Corner features an interview with Sally Hudson from IDC focusing on IDM issues: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52analyst-1735921.html And the Partner Perspectives section contains information from our IDM partners Hub City Media, aurionPro SENA, and ICSynergy.

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx  In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to really get a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx  As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.
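
    For a taste of the SQL Server 2008 feature, a minimal server-level audit of failed logins looks roughly like this (a sketch -- the names and file path are illustrative):

        -- create an audit target, attach an action group, switch both on
        CREATE SERVER AUDIT FailedLoginAudit
            TO FILE (FILEPATH = 'C:\AuditLogs\');
        GO
        CREATE SERVER AUDIT SPECIFICATION FailedLoginSpec
            FOR SERVER AUDIT FailedLoginAudit
            ADD (FAILED_LOGIN_GROUP)
            WITH (STATE = ON);
        GO
        ALTER SERVER AUDIT FailedLoginAudit WITH (STATE = ON);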

    Read the article

  • wget has a 4 second delay

    - by guisius
    Hello. I have tried to wget a page on Windows/Mac, and the response is instant, while the Linux version needs to wait 4 seconds before it shows the response. I just hope this can be solved. More information added. In Ubuntu:

        wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
        --2011-03-04 14:21:17-- xxx://192.168.0.135/test.cgi?cmd=
        Connecting to 192.168.0.135:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `test.txt'
        [ <=> ] 17 --.-K/s in 0s
        2011-03-04 14:21:22 (1.88 MB/s) - `test.txt' saved [17]

    While in Mac OS:

        wget xxx://192.168.0.135/test.cgi?cmd= -O test.txt
        --2011-03-04 14:22:33-- xxx://192.168.0.135/test.cgi?cmd=
        Connecting to 192.168.0.135:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: `test.txt'
        [ <=> ] 17 --.-K/s in 0s
        2011-03-04 14:22:33 (755 KB/s) - `test.txt' saved [17]

    In Ubuntu it delays 4 seconds, while Windows and Mac do not. I believe it may be related to some setting in the network config, such as packet size or window frame, but I have no idea how to set this. PS: the post limit does not allow me to include the URL scheme, so I marked it as xxx.
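
    Since the URL already uses a raw IP, DNS resolution of the target is not the delay, but something else (a proxy variable, reverse lookups, an IPv6 fallback) may be burning those seconds. One generic way to see which phase is slow is curl's built-in timers (http substituted for the xxx placeholder above):

        curl -o /dev/null -s \
            -w 'dns: %{time_namelookup}s connect: %{time_connect}s first byte: %{time_starttransfer}s total: %{time_total}s\n' \
            "http://192.168.0.135/test.cgi?cmd="

    Whichever number absorbs the four seconds points at the layer worth investigating.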

    Read the article

  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog on how to create an SSIS Catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx Here I will summarize the 3 ways I know so far to execute a package while using variables set up in an SSIS Catalog environment. First way: we have an SSIS project having a reference to an environment, and having one of the project parameters use a value set up in the environment called "Development". With this setup, you are limited to calling the packages by right-clicking on the packages in the SSIS Catalog list and selecting Execute, but you are free to choose an absolute or relative path to the environment. The following screenshot shows the 2 available paths to your SSIS environments. Personally, I use the absolute path because of option 3, just to keep everything simple for myself. The second option is to call through a SQL job. This does require you to configure your project to already reference an environment and use its variable. When a job step is set up, the configuration part will require you to select that reference again. This is more useful when you want to automate the same package that needs to be run in different environments. The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx), but the top part had to be modified to include the logic to use the environment reference:

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
            @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
            , @EnvironmentName NVARCHAR(128) = NULL
            , @Use32BitRunTime BIT = FALSE
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;
            -- Create a package execution
            IF @EnvironmentName IS NULL
            BEGIN
                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name=@PackageName,
                    @execution_id=@execution_id OUTPUT,
                    @folder_name=@ProjectFolder,
                    @project_name=@ProjectName,
                    @use32bitruntime=@Use32BitRunTime;
            END
            ELSE
            BEGIN
                DECLARE @EnvironmentID AS INT
                SELECT @EnvironmentID = [reference_id]
                FROM SSISDB.[internal].[environment_references] WITH(NOLOCK)
                WHERE [environment_name] = @EnvironmentName
                    AND [environment_folder_name] = @ProjectFolder

                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name=@PackageName,
                    @execution_id=@execution_id OUTPUT,
                    @folder_name=@ProjectFolder,
                    @project_name=@ProjectName,
                    @reference_id=@EnvironmentID,
                    @use32bitruntime=@Use32BitRunTime;
            END

    Read the article

  • Creating sitemap for google bot - how to mark dynamic content / dynamic subpages?

    - by ojek
    I have a website that is an internet forum. This forum has many categories, and a single category page contains a lot of subpages with listed threads. This internet forum is brand new; about a week ago I filled it with a few hundred thousand threads. I then looked at the Google Webmasters page to see any changes in indexing, but the index only went up from 300 to about 1200, so that means it did not index my added threads (although it added something). This is what my sitemap.xml, which I uploaded on their website, contains (of course there is a lot more of the code; this is just a snippet for a single category -- in my real sitemap file I have all the categories listed as below):

        <url>
          <loc>http://mysite.com/Forums/Physics</loc>
          <changefreq>hourly</changefreq>
        </url>

    Now, I would expect the Google bot to go into http://mysite.com/Forums/Physics, move through all the subpages with thread links, and then get inside each thread and index its content. How can I do this? Also, if this is unclear, I can add a real link to my website.
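
    A crawler only reaches what it can discover by following URLs, so if the thread lists paginate in a way Googlebot cannot follow, one option is to enumerate the subpages (or better, the threads themselves) explicitly in the sitemap -- a sketch with a hypothetical pagination scheme; the real paths depend on how the forum builds its links:

        <url>
          <loc>http://mysite.com/Forums/Physics?page=2</loc>
          <changefreq>hourly</changefreq>
        </url>
        <url>
          <loc>http://mysite.com/Forums/Physics?page=3</loc>
          <changefreq>hourly</changefreq>
        </url>
        <!-- listing each thread URL directly is even more effective,
             since the threads are the pages that need indexing -->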

    Read the article

  • SMTP server to deliver mail to Rails app, how?

    - by Gunchars
    all, this is my first question and I hope I chose the right place to post it. Here's what I need help with: I've been looking for this all day and I'm having a hard time finding a SMTP mail server that would fit the following criteria: lightweight, does one thing and does it good is able to route and deliver local mail to a Rails application The second point could be accomplished in any number of ways. I'm running a VPS, so I have full freedom in how to implement this. It could, for example, put messages straight in the db, pipe them to a helper program that would then process them accordingly or also save messages in a mbox file and run a script after every received message. I'm building a small site so the traffic is not going to be a problem. If there are alternative ways to deliver messages to a Rails app, I'd gladly hear about them. Thank you. EDIT: After long searching, I think I've found what I was looking for. Exim is a mail server that can deliver local mail to pipes. Also, Rails 3 and ActionMailer can make it really easy to process the incoming mail. More info here: http://www.exim.org/exim-html-current/doc/html/spec_html/ch29.html http://guides.rubyonrails.org/action_mailer_basics.html#receiving-emails

    Read the article
