Search Results

Search found 24162 results on 967 pages for 'jquery live'.


  • It's Alive!

    - by Oracle OpenWorld Blog Team
    See what leading-edge, provocative, and fascinating new content will be featured at Oracle OpenWorld in 2012. by Karen Shamban It’s what you’ve been waiting for. The Oracle OpenWorld Content Catalog—the central repository for information on sessions, demos, labs, user groups, exhibitors, and more—is live. Right now. In the Content Catalog you can search on tracks, session types, session categories, keywords, and tags. Or, you can search for your favorite speakers to see what they’re presenting this year. And, directly from the catalog, you can share sessions you’re interested in with friends and colleagues through a broad array of social media channels. Start checking out Oracle OpenWorld content now to plan your week at the conference. Then you’ll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. Thinking of cross-registering for JavaOne? The JavaOne Content Catalog is also live at this very minute so you can see what great content is on offer there.

    Read the article

  • GNOME 3.8 stutters / lags

    - by Robin
    I recently installed the Fedora 19 beta and noticed unusually laggy behaviour of GNOME Shell (3.8) with the Nouveau driver on an nVidia 9200GS (and with the nVidia driver as well). I thought this could be related to Fedora, so I installed Ubuntu 13.04, which ships with GNOME 3.6. Everything was buttery smooth as it has always been, but once I upgraded GNOME to 3.8 I experienced the same laggy performance on Ubuntu as well. I'm talking about the animations, such as the window overview in Activities: openSUSE live image with 3.6 = smooth; Ubuntu 13.04 live image with 3.6 = smooth; Ubuntu 13.04 install with 3.6 = smooth; Ubuntu 13.04 install with 3.8 = laggy; Fedora live image with 3.8 = laggy; Fedora install with 3.8 = laggy. Does anyone have a clue what could be going on?

    Read the article

  • Mozilla Firefox 23 Will Block Mixed SSL Content

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/07/03/mozilla-firefox-23-will-block-mixed-ssl-content.aspx If you have a site that runs over SSL and pulls in content over non-HTTPS requests, you need to be a bit worried. By default, Firefox 23 will block content that is requested from a non-HTTPS address when the page itself is served over SSL. For example, a script using https://code.jquery.com/jquery-1.10.2.min.js will not work because code.jquery.com cannot be reached over HTTPS; the CDN ajax.googleapis.com does support SSL, so you can try that instead. If you want to disable this setting, you can change it in about:config: set security.mixed_content.block_active_content from true to false and the blocking is disabled (mentioned here only as an example).
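    A minimal sketch of the practical fix, assuming the page itself is served over HTTPS: reference the library from a CDN that can actually serve it over HTTPS (the Google CDN path below is used as an illustration), so Firefox 23 has no active mixed content to block.

        <!-- Blocked by Firefox 23 on an HTTPS page: the script is requested over plain HTTP -->
        <script src="http://code.jquery.com/jquery-1.10.2.min.js"></script>

        <!-- Not blocked: the same library served over HTTPS from an SSL-capable CDN -->
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>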

    Read the article

  • Does it matter that the TTL is higher than I've been told?

    - by Andy
    Ok, so the title isn't very descriptive, I know. Basically, I'm trying to configure Outlook.com to use my custom domain. I've followed the steps and created the account etc., and now I have the DNS settings from Windows Live to configure. I added the MX entries and everything last night, but Windows Live is still saying I need to prove my ownership of the domain. The only thing I can think of is that I had to use a different TTL to the one provided, because my web host will only allow a minimum of four hours, whereas Windows Live told me to configure the TTL as one hour. Would that stop anything? By the way, my web host is JustHost (shared hosting).

    Read the article

  • setup test domain ready for site launch

    - by nextyear
    I'm about to launch a site on a live server, after developing it with XAMPP on localhost. I first want to test the site before I make it live. How do I set this up so I have it on a subdomain (i.e. test.livesite.com)? Do I just set it up on the server and add only the CNAME DNS record? Or is there a better way? All I am trying to do is add my site to the server, so I can edit it and look at it before I set it live.

    Read the article

  • Register to the Solaris 11.1 and Solaris Cluster webcast!

    - by Karoly Vegh
    On 7 November there will be a live webcast about Oracle Solaris 11.1 and Oracle Solaris Cluster that you do not want to miss: the Online Launch Event: Oracle Solaris 11 - Innovations for your Data Center. This live webcast will have three sessions: Executive Keynote: Oracle Solaris 11 - Innovations for your Data Center; Oracle Technical Session: Oracle Solaris 11.1; and Oracle Technical Session: Oracle Solaris Cluster. There will be a live Q&A session, but feel free to tweet as well with #solaris. See you there! -- charlie

    Read the article

  • How an offline main domain can influence traffic on an active sub domain

    - by danie7L T
    The website(s) are being designed for a company active in 3 different areas. As an example, let's use the following structure: www.example.com [sub1.example.com] [sub2.example.com] [sub3.example.com]. sub2.example.com and sub3.example.com are ready to go live, but www.example.com really isn't and sends a 503 HTTP error code. I would like to know whether this situation will affect the traffic and ranking of the subdomains that are ready to go live. Is it preferable to wait and go live together with the main domain? Or is there nothing to "fear" and one doesn't affect the other? Thank you

    Read the article

  • Your favorite LiveCD

    - by sova
    Perhaps this is a question better suited for linux.se but since I have servers on all environments I was wondering what your favorite LiveCD is. Wiki defines this, for those unfamiliar, as a CD or DVD containing a bootable computer operating system. Live CDs are unique in that they have the ability to run a complete, modern operating system on a computer lacking mutable secondary storage, such as a hard disk drive. Live USB flash drives are similar to live CDs, but often have the added functionality of automatically and transparently writing changes back to their bootable medium. I tend to stick with Knoppix, and today I have to register the machine on the network before I can get any of my utilities running via IP. Any suggestions?

    Read the article

  • The Windows Store... why did I sign up with this mess again?

    - by FransBouma
    Yesterday, Microsoft revealed that the Windows Store is now open to all developers in a wide range of countries and locations. For the people who think "wtf is the 'Windows Store'?", it's the central place where Windows 8 users will be able to find, download and purchase applications (or as we now have to say to not look like a computer illiterate: <accent style="Kentucky">aaaaappss</accent>) for Windows 8. As this is the store which is integrated into Windows 8, it's an interesting place for ISVs, as potential customers might very well look there first. This of course isn't true for all kinds of software, and developer tools in general aren't the kind of applications most users will download from the Windows store, but a presence there can't hurt. Now, this Windows Store hosts two kinds of applications: 'Metro-style' applications and 'Desktop' applications. The 'Metro-style' applications are applications created for the new 'Metro' UI which is present on Windows 8 desktop and Windows RT (the single color/big font fingerpaint-oriented UI). 'Desktop' applications are the applications we all run and use on Windows today. Our software are desktop applications. The Windows Store hosts all Metro-style applications locally in the store and handles the payment for these applications. This means you upload your application (sorry, 'app') to the store, jump through a lot of hoops, Microsoft verifies that your application is not violating a tremendous long list of rules and after everything is OK, it's published and hopefully you get customers and thus earn money. Money which Microsoft will pay you on a regular basis after customers buy your application. Desktop applications are not following this path however. Desktop applications aren't hosted by the Windows Store. Instead, the Windows Store more or less hosts a page with the application's information and where to get the goods. I.o.w.: it's nothing more than a product's Facebook page. Microsoft will simply redirect a visitor of the Windows Store to your website and the visitor will then use your site's system to purchase and download the application. This last bit of information is very important. So, this morning I started with fresh energy to register our company 'Solutions Design bv' at the Windows Store and our two applications, LLBLGen Pro and ORM Profiler. First I went to the Windows Store dashboard page. If you don't have an account, you have to log in or sign up if you don't have a live account. I signed in with my live account. After that, it greeted me with a page where I had to fill in a code which was mailed to me. My local mail server polls every several minutes for email so I had to kick it to get it immediately. I grabbed the code from the email and I was presented with a multi-step process to register myself as a company or as an individual. In red I was warned that this choice was permanent and not changeable. I chuckled: Microsoft apparently stores its data on paper, not in digital form. I chose 'company' and was presented with a lengthy form to fill out. On the form there were two strange remarks: Per company there can just be 1 (one, uno, not zero, not two or more) registered developer, and only that developer is able to upload stuff to the store. I have no idea how this works with large companies, oh the overhead nightmares... "Sorry, but John, our registered developer with the Windows Store is on holiday for 3 months, backpacking through Australia, no, he's not reachable at this point. M'yeah, sorry bud. 
Hey, did you fill in those TPS reports yesterday?" A separate Approver has to be specified, which has to be a different person than the registered developer. Apparently to Microsoft a company with just 1 person is not a company. Luckily we're with two people! *pfew*, dodged that one, otherwise I would be stuck forever: the choice I already made was not reversible! After I had filled out the form and it was all well and good and accepted by the Microsoft lackey who had to write it all down in some paper notebook ("Hey, be warned! It's a permanent choice! Written down in ink, can't be changed!"), I was presented with the question how I wanted to pay for all this. "Pay for what?" I wondered. Must be the paper they were scribbling the information on, I concluded. After all, there's a financial crisis going on! How could I forget! Silly me. "Ok fair enough". The price was 75 Euros, not the end of the world. I could only pay by credit card, so it was accepted quickly. Or so I thought. You see, Microsoft has a different idea about CC payments. In the normal world, you type in your CC number, some date, a name and a security code and that's it. But Microsoft wants to verify this even more. They want to make a verification purchase of a very small amount and are doing that with a special code in the description. You then have to type in that code in a special form in the Windows Store dashboard and after that you're verified. Of course they'll refund the small amount they pull from your card. Sounds simple, right? Well... no. The problem starts with the fact that I can't see the CC activity on some website: I have a bank issued CC card. I get the CC activity once a month on a piece of paper sent to me. The bank's online website doesn't show them. So it's possible I have to wait for this code till October 12th. One month. "So what, I'm not going to use it anyway, Desktop applications don't use the payment system", I thought. "Haha, you're so naive, dear developer!" Microsoft won't allow you to publish any applications till this verification is done. So no application publishing for a month. Wouldn't it be nice if things were, you know, digital, so things got done instantly? But of course, that lackey who scribbled everything in the Big Windows Store Registration Book isn't that quick. Can't blame him though. He's just doing his job. Now, after the payment was done, I was presented with a page which tells me Microsoft is going to use a third party company called 'Symantec', which will verify my identity again. The page explains to me that this could be done through email or phone and that they'll contact the Approver to verify my identity. "Phone?", I thought... that's a little drastic for a developer account to publish a single page of information about an external hosted software product, isn't it? On Facebook I just added a page, done. And paying you, Microsoft, took less information: you were happy to take my money before my identity was even 'verified' by this 3rd party's minions! "Double standards!", I roared. No-one cared. But it's the thought of getting it off your chest, you know. Luckily for me, everyone at Symantec was asleep when I was registering so they went for the fallback option in case phone calls were not possible: my Approver received an email. Imagine you have to explain the idiot web of security theater I was caught in to someone else who then has to reply a random person over the internet that I indeed was who I said I was. 
As she's a true sweetheart, she gave me the benefit of the doubt and assured that for now, I was who I said I was. Remember, this is for a desktop application, which is only a link to a website, some pictures and a piece of text. No file hosting, no payment processing, nothing, just a single page. Yeah, I also thought I was crazy. But we're not at the end of this quest yet. I clicked around in the confusing menus of the Windows Store dashboard and found the 'Desktop' section. I get a helpful screen with a warning in red that it can't find any certified 'apps'. True, I'm just getting started, buddy. I see a link: "Check the Windows apps you submitted for certification". Well, I haven't submitted anything, but let's see where it brings me. Oh the thrill of adventure! I click the link and I end up on this site: the hardware/desktop dashboard account registration. "Erm... but I just registered...", I mumbled to no-one in particular. Apparently for desktop registration / verification I have to register again, it tells me. But not only that, the desktop application has to be signed with a certificate. And not just some random el-cheapo certificate you can get at any mall's discount store. No, this certificate is special. It's precious. This certificate, the 'Microsoft Authenticode' Digital Certificate, is the only certificate that's acceptable, and jolly, it can be purchased from VeriSign for the price of only ... $99.-, but be quick, because this is a limited time offer! After that it's, I kid you not, $499.-. 500 dollars for a certificate to sign an executable. But, I do feel special, I got a special price. Only for me! I'm glowing. Not for long though. Here I started to wonder, what the benefit of it all was. I now again had to pay money for a shiny certificate which will add 'Solutions Design bv' to our installer as the publisher instead of 'unknown', while our customers download the file from our website. Not only that, but this was all about a Desktop application, which wasn't hosted by Microsoft. They only link to it. And make no mistake. These prices aren't single payments. Every year these have to be renewed. Like a membership of an exclusive club: you're special and privileged, but only if you cough up the dough. To give you an example how silly this all is: I added LLBLGen Pro and ORM Profiler to the Visual Studio Gallery some time ago. It's the same thing: it's a central place where one can find software which adds to / extends / works with Visual Studio. I could simply create the pages, add the information and they show up inside Visual Studio. No files are hosted at Microsoft, they're downloaded from our website. Exactly the same system. As I have to wait for the CC transcripts to arrive anyway, I can't proceed with publishing in this new shiny store. After the verification is complete I have to wait for verification of my software by Microsoft. Even Desktop applications need to be verified using a long list of rules which are mainly focused on Metro-style applications. Even while they're not hosted by Microsoft. I wonder what they'll find. "Your application wasn't approved. It violates rule 14 X sub D: it provides more value than our own competing framework". While I was writing this post, I tried to check something in the Windows Store Dashboard, to see whether I remembered it correctly. I was presented again with the question, after logging in with my live account, to enter the code that was just mailed to me. Not the previous code, a brand new one. 
Again I had to kick my mail server to pull the email to proceed. This was it. This 'experience' is so beyond miserable, I'm afraid I have to say goodbye for now to the 'Windows Store'. It's simply not worth my time. Now, about live accounts. You might know this: live accounts are tied to everything you do with Microsoft. So if you have an MSDN subscription, e.g. the one which costs over $5000.-, it's tied to this same live account. But the fun thing is, you can login with your live account to the MSDN subscriptions with just the account id and password. No additional code is mailed to you. While it gives you access to all Microsoft software available, including your licenses. Why the draconian security theater with this Windows Store, while all I want is to publish some desktop applications while on other Microsoft sites it's OK to simply sign in with your live account: no codes needed, no verification and no certificates? Microsoft, one thing you need with this store and that's: apps. Apps, apps, apps, apps, aaaaaaaaapps. Sorry, my bad, got carried away. I just can't stand the word 'app'. This store's shelves have to be filled to the brim with goods. But instead of being welcomed into the store with open arms, I have to fight an uphill battle with an endless list of rules and bullshit to earn the privilege to publish in this shiny store. As if I have to be thrilled to be one of the exclusive club called 'Windows Store Publishers'. As if Microsoft doesn't want it to succeed. Craig Stuntz sent me a link to an old blog post of his regarding code signing and uploading to Microsoft's old mobile store from back in the WinMo5 days: http://blogs.teamb.com/craigstuntz/2006/10/11/28357/. Good read and good background info about how little things changed over the years. I hope this helps Microsoft make things more clearer and smoother and also helps ISVs with their decision whether to go with the Windows Store scheme or ignore it. For now, I don't see the advantage of publishing there, especially not with the nonsense rules Microsoft cooked up. Perhaps it changes in the future, who knows.

    Read the article

  • The Microsoft Ajax Library and Visual Studio Beta 2

    - by Stephen Walther
    Visual Studio 2010 Beta 2 was released this week and one of the first things that I hope you notice is that it no longer contains the latest version of ASP.NET AJAX. What happened? Where did AJAX go? Just like Sting and The Police, just like Phil Collins and Genesis, just like Greg Page and the Wiggles, AJAX has gone out of band! We are starting a solo career. A Name Change First things first. In previous releases, our Ajax framework was named ASP.NET AJAX. We now have changed the name of the framework to the Microsoft Ajax Library. There are two reasons behind this name change. First, the members of the Ajax team got tired of explaining to everyone that our Ajax framework is not tied to the server-side ASP.NET framework. You can use the Microsoft Ajax Library with ASP.NET Web Forms, ASP.NET MVC, PHP, Ruby on RAILS, and even pure HTML applications. Our framework can be used as a client-only framework and having the word ASP.NET in our name was confusing people. Second, it was time to start spelling the word Ajax like everyone else. Notice that the name is the Microsoft Ajax Library and not the Microsoft AJAX library. Originally, Microsoft used upper case AJAX because AJAX originally was an acronym for Asynchronous JavaScript and XML. And, according to Strunk and Wagnell, acronyms should be all uppercase. However, Ajax is one of those words that have migrated from acronym status to “just a word” status. So whenever you hear one of your co-workers talk about ASP.NET AJAX, gently correct your co-worker and say “It is now called the Microsoft Ajax Library.” Why OOB? But why move out-of-band (OOB)? The short answer is that we have had approximately 6 preview releases of the Microsoft Ajax Library over the last year. That’s a lot. We pride ourselves on being agile. Client-side technology evolves quickly. We want to be able to get a preview version of the Microsoft Ajax Library out to our customers, get feedback, and make changes to the library quickly. Shipping the Microsoft Ajax Library out-of-band keeps us agile and enables us to continue to ship new versions of the library even after ASP.NET 4 ships. Showing Love for JavaScript Developers One area in which we have received a lot of feedback is around making the Microsoft Ajax Library easier to use for developers who are comfortable with JavaScript. We also wanted to make it easy for jQuery developers to take advantage of the innovative features of the Microsoft Ajax Library. To achieve these goals, we’ve added the following features to the Microsoft Ajax Library (these features are included in the latest preview release that you can download right now): A simplified imperative syntax – We wanted to make it brain-dead simple to create client-side Ajax controls when writing JavaScript. A client script loader – We wanted the Microsoft Ajax Library to load all of the scripts required by a component or control automatically. jQuery integration – We love the jQuery selector syntax. We wanted to make it easy for jQuery developers to use the Microsoft Ajax Library without changing their programming style. 
If you are interested in learning about these new features of the Microsoft Ajax Library, I recommend that you read the following blog post by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2009/10/15/announcing-microsoft-ajax-library-preview-6-and-the-microsoft-ajax-minifier.aspx Downloading the Latest Version of the Microsoft Ajax Library Currently, the best place to download the latest version of the Microsoft Ajax Library is directly from the ASP.NET CodePlex project: http://aspnet.codeplex.com/ As I write this, the current version is Preview 6. The next version is coming out at the PDC. Summary I’m really excited about the future of the Microsoft Ajax Library. Moving outside of the ASP.NET framework provides us the flexibility to remain agile and continue to innovate aggressively. The latest preview release of the Microsoft Ajax Library includes several major new features including a client script loader, jQuery integration, and a simplified client control creation syntax.

    Read the article

  • Application Demos in UPK

    - by [email protected]
    Over the years, User Productivity Kit has expanded to include solutions to many project challenges. As of UPK 3.6.1, solutions are provided for pre and post application go-live learning, application testing, system documentation, presentation output, and more. New in UPK 3.6.1 are additional features that can be used effectively for application demo purposes. This can come in handy when you need to do a demo but don't want to show or can't show the live application. Maybe you're doing a presentation for a group of project stakeholders and want to focus on the business workflow implemented by the application rather than the mechanics of using it. Or possibly, you need to show the application but you're disconnected from any network preventing you from running the live application. In any of these cases, a presentation aid that represents the live application is what's needed. Previous versions of the UPK topic player would allow you to do this but would always show those UPK user interface elements that help a user learn the application. When you're presenting the narrative live, the UPK bubbles can be a distraction. UPK 3.6.1 provides some new features that allow you to control whether the bubbles display. There are two ways to hide bubbles in a topic. The first is a topic property that allows you to control bubbles across the entire topic. There are 3 settings for the Show Bubbles topic property. The default setting is Use frame settings which allows you to control whether bubbles display on a frame by frame basis. When you choose Always, the bubbles will always display regardless of the frame setting. The final choice is Never. Choosing Never will hide every bubble in your topic with one setting change. As with Always, choosing Never will ignore the frame setting. The second way to control the bubbles is at the frame level. First ensure that the topic's Show Bubbles property is set to Use frame settings. Navigate to the frame on which you want to turn off the bubble and click the Display bubble for this frame button to turn off the bubble. When you play the topic, the bubble will no longer be displayed. Depending on your needs, you might also use another longstanding UPK feature that allows you to control whether the action area displays on a frame. Just click the Action area on/off button to toggle its display. I've found the frame properties to be useful beyond creating presentation aids. When creating "See It!" only topics for more advanced users, I may hide the bubbles on some of the more straightforward frames. For example, if I have a form where one needs to fill out an address, I may display the first bubble in the sequence and explain what the subsequent steps are doing. I then hide bubbles on the remaining frames which are the more mechanical steps of entering the address. We'd like to hear your thoughts on this new UPK feature. Use the comments below to tell us how you've used it. John Zaums Senior Director, Product Development Oracle User Productivity Kit

    Read the article

  • Some thoughts on email hosting for one’s own domain

    - by jamiet
    I have used the same email providers for my own domains for a few years now however I am considering moving over to a new provider. In this email I’ll share my current thoughts and hopefully I’ll get some feedback that might help me to decide on what to do next. What I use today I have three email addresses that I use primarily (I have changed the domains in this blog post as I don’t want to give them away to spammers): [email protected] – My personal account that I give out to family and friends and which I use to register on websites [email protected]  - An account that I use to catch email from the numerous mailing lists that I am on [email protected] – I am a self-employed consultant so this is an account that I hand out to my clients, my accountant, and other work-related organisations Those two domains (jtpersonaldomain.com & jtworkdomain.com) are both managed at http://domains.live.com which is a fantastic service provided by Microsoft that for some perplexing reason they never bother telling anyone about. It offers multiple accounts (I have seven at jtpersonaldomain.com though as already stated I only use two of them) which are accessed via Outlook.com (formerly Hotmail.com) along with usage reporting plus a few other odds and sods that I never use. Best of all though, its totally free. In addition, given that I have got both domains hosted using http://domains.live.com I can link my various accounts together and switch between them at Outlook.com without having to login and logout: N.B. You’ll notice that there are two other accounts listed there in addition to the three I already mentioned. One is my mum’s account which helps me provide IT support/spam filtering services to her and the other is the donation account for AdventureWorks on Azure. I find that linking feature to be very handy indeed. Finally, http://domains.live.com is the epitome of “it just works”. I set up jtworkdomain.com at http://domains.live.com over three years ago and I am pretty certain I haven’t been back there even once to administer it. Proposed changes OK, so if I like http://domains.live.com so much why am I considering changing? Well, I earn my corn in the Microsoft ecosystem and if I’m reading the tea-leaves correctly its looking increasingly likely that the services that I’m going to have to be familiar with in the future are all going to be running on top of and alongside Windows Azure Active Directory and Office 365 respectively. Its clear to me that Microsoft’s are pushing their customers toward cloud services and, like it or lump it, data integration developers like me may have to come along for the ride. I don’t think the day is too far off when we can log into Windows Azure SQL Database (aka SQL Azure), Team Foundation Service, Dynamics etc… using the same credentials that are currently used for Office 365 and over time I would expect those things to get integrated together a lot better – that integration will be based upon a Windows Azure Active Directory identity. This should not come as a surprise, in my opinion Microsoft’s whole enterprise play over the past 15 or 20 years can be neatly surmised as “get people onto Windows Server and Active Directory then upsell from there” – in the not-too-distant-future the only difference is that they’re trying to do it in the cloud. I want to get familiar with these services and hence I am considering moving jtworkdomain.com onto Office 365. 
I’ll lose the convenience of easily being able to switch to that account at Outlook.com and moreover I’ll have to start paying for it (I think it’ll be about fifty quid a year – not a massive amount but its quite a bit more than free) but increasingly this is beginning to look like a move I have to make. So that’s where my head is at right now. Anyone have any relevant thoughts or experiences to share? Please let me know in the comments below. @Jamiet

    Read the article

  • Can't boot in Ubuntu after windows upgrade

    - by VanceAnce
    After my latest updates for Ubuntu and Windows XP, I got a Grub error on booting the next day. ls lists the following (without the parentheses): sd0, sd1, msdos, sd2, sd5, sd6. When I try to get into one with (sd0,xy)/ it doesn't detect a system, or gives an unknown file system error. I booted a live session with a Knoppix live CD and found out that all the data still exists. I also tried to recover with TestDisk and it finds all systems. Here is the TestDisk result:

        Start End Size in sectors
        1 * HPFS - NTFS      0   1  1   7079 254 63  113740137
        2 E extended LBA  7080   0  1  12161 254 63   81642330
        5 L HPFS - NTFS   7080   1  1  10266 254 63   51199092  [Schule]
          X extended     12031  30  1  12161 254 63    2102625
        6 L Linux Swap   12031  31 33  12161 254 63    2102530

    I have one WinXP Home install, one Ubuntu install (ext3 + swap) and one WinXP Professional install. I then wrote the MBR with TestDisk, but I always get the same errors from Grub. What should I do? I need both XP and Ubuntu. Help me please. More info is in the answers below - sorry for the confusing style, but I am working from different live systems and browsers and always have to reboot. The boot info script output is also further down; maybe an advanced user can correct my messy posting - once I can solve my issue I will register here. Thanks, and please help me with these weird issues! As I still can't comment on my own answer or those at the top, I have to put this here as a separate answer (or even as an edit - maybe a browser failure while using the live CDs, because this post I can edit). Here is the boot info script output - the result is the same as with TestDisk, but it looks worse, because it also doesn't detect my old Ubuntu, even though there was no erase or overwrite process visible when ending the last working session. Output:

        Boot Info Script 0.61      [1 April 2012]
        ============================= Boot Info Summary: ===============================
        => Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sda.
        sda1: __________________________________________
            File system:       ntfs
            Boot sector type:  Windows XP: NTFS
            Boot sector info:  No errors found in the Boot Parameter Block.
            Operating System:  Windows XP
            Boot files:        /boot.ini /ntldr /NTDETECT.COM
        sda2: __________________________________________
            File system:       Extended Partition
            Boot sector type:  -
            Boot sector info:
        sda5: __________________________________________
            File system:       ntfs
            Boot sector type:  Windows XP: NTFS
            Boot sector info:  According to the info in the boot sector, sda5 starts at sector 63.
            Operating System:  Windows XP
            Boot files:
        sda6: __________________________________________
            File system:       swap
            Boot sector type:  -
            Boot sector info:

        ============================ Drive/Partition Info: =============================
        Drive: sda _______________________________________
        Disk /dev/sda: 100.0 GB, 100030242816 bytes
        255 heads, 63 sectors/track, 12161 cylinders, total 195371568 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes

        Partition   Boot  Start Sector   End Sector    # of Sectors  Id  System
        /dev/sda1   *               63   113,740,199    113,740,137   7  NTFS / exFAT / HPFS
        /dev/sda2          113,740,200   195,382,529     81,642,330   f  W95 Extended (LBA)
        /dev/sda5          113,740,263   164,939,354     51,199,092   7  NTFS / exFAT / HPFS
        /dev/sda6          193,280,000   195,382,529      2,102,530  82  Linux swap / Solaris

        /dev/sda2 ends after the last sector of /dev/sda
        /dev/sda6 ends after the last sector of /dev/sda

        "blkid" output: ____________________________________
        Device      UUID                                   TYPE       LABEL
        /dev/loop0                                         squashfs
        /dev/sda1   6596D86768011128                       ntfs
        /dev/sda5   1300D3B7744EC141                       ntfs       Schule
        /dev/sda6   5b95f2a1-4145-43a5-ac51-41d7dd32b213   swap

        ================================ Mount points: =================================
        Device      Mount_Point  Type      Options
        /dev/loop0  /rofs        squashfs  (ro,noatime)
        /dev/sr0    /cdrom       iso9660   (ro,noatime)

        ================================ sda1/boot.ini: ================================
        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Home Edition" /fastdetect /NoExecute=OptOut
        multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
        [spybotsd]
        timeout.old=30

    The last part shows that I now use the Windows boot loader, so I can access at least one OS. But shouldn't I also get access to my Ubuntu partitions with live Linux CDs? Or do I have to boot with Grub to get to those files?

    Read the article

  • Pro/con of using Angular directives for complex form validation/ GUI manipulation

    - by tengen
    I am building a new SPA front end to replace an existing enterprise's legacy hodgepodge of systems that are outdated and in need of updating. I am new to angular, and wanted to see if the community could give me some perspective. I'll state my problem, and then ask my question. I have to generate several series of check boxes based on data from a .js include, with data like this: $scope.fieldMappings.investmentObjectiveMap = [ {'id':"CAPITAL PRESERVATION", 'name':"Capital Preservation"}, {'id':"STABLE", 'name':"Moderate"}, {'id':"BALANCED", 'name':"Moderate Growth"}, // etc {'id':"NONE", 'name':"None"} ]; The checkboxes are created using an ng-repeat, like this: <div ng-repeat="investmentObjective in fieldMappings.investmentObjectiveMap"> ... </div> However, I needed the values represented by the checkboxes to map to a different model (not just 2-way-bound to the fieldmappings object). To accomplish this, I created a directive, which accepts a destination array destarray which is eventually mapped to the model. I also know I need to handle some very specific gui controls, such as unchecking "None" if anything else gets checked, or checking "None" if everything else gets unchecked. Also, "None" won't be an option in every group of checkboxes, so the directive needs to be generic enough to accept a validation function that can fiddle with the checked state of the checkbox group's inputs based on what's already clicked, but smart enough not to break if there is no option called "NONE". I started to do that by adding an ng-click which invoked a function in the controller, but in looking around Stack Overflow, I read people saying that its bad to put DOM manipulation code inside your controller - it should go in directives. So do I need another directive? So far: (html): <input my-checkbox-group type="checkbox" fieldobj="investmentObjective" ng-click="validationfunc()" validationfunc="clearOnNone()" destarray="investor.investmentObjective" /> Directive code: .directive("myCheckboxGroup", function () { return { restrict: "A", scope: { destarray: "=", // the source of all the checkbox values fieldobj: "=", // the array the values came from validationfunc: "&" // the function to be called for validation (optional) }, link: function (scope, elem, attrs) { if (scope.destarray.indexOf(scope.fieldobj.id) !== -1) { elem[0].checked = true; } elem.bind('click', function () { var index = scope.destarray.indexOf(scope.fieldobj.id); if (elem[0].checked) { if (index === -1) { scope.destarray.push(scope.fieldobj.id); } } else { if (index !== -1) { scope.destarray.splice(index, 1); } } }); } }; }) .js controller snippet: .controller( 'SuitabilityCtrl', ['$scope', function ( $scope ) { $scope.clearOnNone = function() { // naughty jQuery DOM manipulation code that // looks at checkboxes and checks/unchecks as needed }; The above code is done and works fine, except the naughty jquery code in clearOnNone(), which is why I wrote this question. And here is my question: after ALL this, I think to myself - I could be done already if I just manually handled all this GUI logic and validation junk with jQuery written in my controller. At what point does it become foolish to write these complicated directives that future developers will have to puzzle over more than if I had just written jQuery code that 99% of us would understand with a glance? How do other developers draw the line? I see this all over Stack Overflow. 
For example, this question seems like it could be answered with a dozen lines of straightforward jQuery, yet he has opted to do it the angular way, with a directive and a partial... it seems like a lot of work for a simple problem. Specifically, I suppose I would like to know: how SHOULD I be writing the code that checks whether "None" has been selected (if it exists as an option in this group of checkboxes), and then check/uncheck the other boxes accordingly? A more complex directive? I can't believe I'm the only developer that is having to implement code that is more complex than needed just to satisfy an opinionated framework.
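    One possible way to draw that line, sketched below under stated assumptions (the module name 'app' is a placeholder, and the fall-back to 'NONE' only makes sense if the group actually contains that option): keep every check/uncheck rule on the bound destination array inside the directive and let a $watchCollection push the state back into the checkboxes, so the controller never touches the DOM. This is not the poster's solution, just one shape the answer could take.

        // Sketch: model-driven "None" handling inside the directive, no jQuery DOM walking.
        angular.module('app').directive('myCheckboxGroup', function () {
          return {
            restrict: 'A',
            scope: { destarray: '=', fieldobj: '=' },
            link: function (scope, elem) {
              // model -> view: keep every box in the group in sync with the shared array
              scope.$watchCollection('destarray', function (arr) {
                elem[0].checked = arr.indexOf(scope.fieldobj.id) !== -1;
              });
              // view -> model: apply the group rules to the array inside $apply
              elem.bind('click', function () {
                scope.$apply(function () {
                  var arr = scope.destarray, id = scope.fieldobj.id;
                  if (elem[0].checked) {
                    if (id === 'NONE') {
                      arr.length = 0;                              // "None" clears everything else
                    } else if (arr.indexOf('NONE') !== -1) {
                      arr.splice(arr.indexOf('NONE'), 1);          // any real choice clears "None"
                    }
                    if (arr.indexOf(id) === -1) { arr.push(id); }
                  } else {
                    var i = arr.indexOf(id);
                    if (i !== -1) { arr.splice(i, 1); }
                    if (arr.length === 0) { arr.push('NONE'); }    // assumes the group has a "None" option
                  }
                });
              });
            }
          };
        });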

    Read the article

  • Visual Studio 2010 and javascript debugging in external javascript files (embedded and minified).

    - by OKB
    Hi, The asp.net web application I'm working on is written in asp.net 3.5, the web app solution is upgraded from VS 2008 (don't know if that matter). The solution had javascript in the aspx files before I moved the javascript to external files. Now what I have done is to set all the javascript files to be embedded resource (except the jquery.js file) and I want to minify them when building for release by using the MS Ajax Minifier. I want to use the minified javascript files when I'm in the RELEASE mode and when I'm in DEBUG mode I want to use the "normal" versions. My problem now is that I'm unable to debug the javascript code in debug mode. When I set a break point a javascript function, VS is not breaking at all when the function is executed. I have added this entry in my web.config: <system.web> <compilation defaultLanguage="c#" debug="true" /> </system.web> Here how I register the jquery in an aspx-file: <asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server"> <Scripts> <asp:ScriptReference Path="~/Javascript/jquery.js"/> </Scripts> </asp:ScriptManagerProxy> External javascript registration in the code-behind: #if DEBUG [assembly: WebResource("braArkivWeb.Javascript.jquery.js", "text/javascript")] [assembly: WebResource(braArkivWeb.ArkivdelSearch.JavaScriptResource, "text/javascript")] #else [assembly: WebResource("braArkivWeb.Javascript.jquery.min.js", "text/javascript")] [assembly: WebResource(braArkivWeb.ArkivdelSearch.JavaScriptMinResource, "text/javascript")] #endif public partial class ArkivdelSearch : Page { public const string JavaScriptResource = "braArkivWeb.ArkivdelSearch.js"; public const string JavaScriptMinResource = "braArkivWeb.ArkivdelSearch.min.js"; protected void Page_Init(object sender, EventArgs e) { InitPageClientScript(); } private void InitPageClientScript() { #if DEBUG this.Page.ClientScript.RegisterClientScriptResource(typeof(ArkivdelSearch), "braArkivWeb.Javascript.jquery.js"); this.Page.ClientScript.RegisterClientScriptResource(typeof(ArkivdelSearch), JavaScriptResource); #else this.Page.ClientScript.RegisterClientScriptResource(typeof(ArkivdelSearch), "braArkivWeb.Javascript.jquery.min.js"); this.Page.ClientScript.RegisterClientScriptResource(typeof(ArkivdelSearch), JavaScriptMinResource); #endif StringBuilder sb = new StringBuilder(); Page.ClientScript.RegisterStartupScript(typeof(ArkivdelSearch), "initArkivdelSearch", sb.ToString(), true); } } In the project file I have added this code to minify the javascripts: <!-- Minify all JavaScript files that were embedded as resources --> <UsingTask TaskName="AjaxMin" AssemblyFile="$(MSBuildProjectDirectory)\..\..\SharedLib\AjaxMinTask.dll" /> <PropertyGroup> <ResGenDependsOn> MinifyJavaScript; $(ResGenDependsOn) </ResGenDependsOn> </PropertyGroup> <Target Name="MinifyJavaScript" Condition=" '$(ConfigurationName)'=='Release' "> <Copy SourceFiles="@(EmbeddedResource)" DestinationFolder="$(IntermediateOutputPath)" Condition="'%(Extension)'=='.js'"> <Output TaskParameter="DestinationFiles" ItemName="EmbeddedJavaScriptResource" /> </Copy> <AjaxMin JsSourceFiles="@(EmbeddedJavaScriptResource)" JsSourceExtensionPattern="\.js$" JsTargetExtension=".js" /> <ItemGroup> <EmbeddedResource Remove="@(EmbeddedResource)" Condition="'%(Extension)'=='.js'" /> <EmbeddedResource Include="@(EmbeddedJavaScriptResource)" /> <FileWrites Include="@(EmbeddedJavaScriptResource)" /> </ItemGroup> </Target> Do you see what I'm doing wrong? Or what I'm missing in order to be able to debug my javascript code? Best Regards, OKB

    Read the article

  • Reading a POP3 server with only TcpClient and StreamWriter/StreamReader[SOLVED]

    - by WebDevHobo
    I'm trying to read mails from my live.com account, via the POP3 protocol. I've found the the server is pop3.live.com and the port if 995. I'm not planning on using a pre-made library, I'm using NetworkStream and StreamReader/StreamWriter for the job. I need to figure this out. So, any of the answers given here: http://stackoverflow.com/questions/44383/reading-email-using-pop3-in-c are not usefull. It's part of a larger program, but I made a small test to see if it works. Eitherway, i'm not getting anything. Here's the code I'm using, which I think should be correct. EDIT: this code is old, please refer to the second block problem solved. public Program() { string temp = ""; using(TcpClient tc = new TcpClient(new IPEndPoint(IPAddress.Parse("127.0.0.1"),8000))) { tc.Connect("pop3.live.com",995); using(NetworkStream nws = tc.GetStream()) { using(StreamReader sr = new StreamReader(nws)) { using(StreamWriter sw = new StreamWriter(nws)) { sw.WriteLine("USER " + user); sw.Flush(); sw.WriteLine("PASS " + pass); sw.Flush(); sw.WriteLine("LIST"); sw.Flush(); while(temp != ".") { temp += sr.ReadLine(); } } } } } Console.WriteLine(temp); } Visual Studio debugger constantly falls over tc.Connect("pop3.live.com",995); Which throws an "A socket operation was attempted to an unreachable network 65.55.172.253:995" error. So, I'm sending from port 8000 on my machine to port 995, the hotmail pop3 port. And I'm getting nothing, and I'm out of ideas. Second block: Problem was apparently that I didn't write the quit command. The Code: public Program() { string str = string.Empty; string strTemp = string.Empty; using(TcpClient tc = new TcpClient()) { tc.Connect("pop3.live.com",995); using(SslStream sl = new SslStream(tc.GetStream())) { sl.AuthenticateAsClient("pop3.live.com"); using(StreamReader sr = new StreamReader(sl)) { using(StreamWriter sw = new StreamWriter(sl)) { sw.WriteLine("USER " + user); sw.Flush(); sw.WriteLine("PASS " + pass); sw.Flush(); sw.WriteLine("LIST"); sw.Flush(); sw.WriteLine("QUIT "); sw.Flush(); while((strTemp = sr.ReadLine()) != null) { if(strTemp == "." || strTemp.IndexOf("-ERR") != -1) { break; } str += strTemp; } } } } } Console.WriteLine(str); }

    Read the article

  • How to handle drag events on iphone and ipad with javascript/jquery?

    - by fmsf
    Hey, I have a little app that has been under development for some time. My friends and I have been working really hard on this and are near release of the beta version. I want to give some demos using an iPhone and iPad to look cool :p Now my problem is how to handle: Mouse Down, Mouse Up, Mouse Leave. The multitouch interface of the iPhone (which I expect is similar to the iPad's) treats a mouse move in the browser as a scrolling gesture. One could try to capture the scroll event and use it to simulate dragging, but I don't even know whether that is doable or whether it would only ever be a hack. Does anyone know of a more robust way to manage dragging events on the iPhone/iPad?
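    A common approach (a sketch, not taken from the question or its answers): listen for the touch events Mobile Safari actually fires - touchstart, touchmove and touchend - call preventDefault() so the page does not scroll, and forward the first touch point to the existing mouse-style handlers. The handleDragStart/handleDragMove/handleDragEnd names below are placeholders for whatever the app already uses.

        // Sketch: map iPhone/iPad touch events onto existing mouse-style drag handlers.
        // handleDragStart/handleDragMove/handleDragEnd are hypothetical names, not real APIs.
        var el = document.getElementById('drag-area');   // placeholder element id

        el.addEventListener('touchstart', function (e) {
          e.preventDefault();                            // stop Mobile Safari from scrolling
          var t = e.touches[0];
          handleDragStart(t.pageX, t.pageY);             // reuse the mousedown logic
        }, false);

        el.addEventListener('touchmove', function (e) {
          e.preventDefault();
          var t = e.touches[0];
          handleDragMove(t.pageX, t.pageY);              // reuse the mousemove logic
        }, false);

        el.addEventListener('touchend', function (e) {
          handleDragEnd();                               // reuse the mouseup / mouseleave logic
        }, false);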

    Read the article

  • Binding data to subgrid

    - by bhargav
    i have a jqgrid with a subgrid...the databinding is done in javascript like this <script language="javascript" type="text/javascript"> var x = screen.width; $(document).ready(function () { $("#projgrid").jqGrid({ mtype: 'POST', datatype: function (pdata) { getData(pdata); }, colNames: ['Project ID', 'Due Date', 'Project Name', 'SalesRep', 'Organization:', 'Status', 'Active Value', 'Delete'], colModel: [ { name: 'Project ID', index: 'project_id', width: 12, align: 'left', key: true }, { name: 'Due Date', index: 'project_date_display', width: 15, align: 'left' }, { name: 'Project Name', index: 'project_title', width: 60, align: 'left' }, { name: 'SalesRep', index: 'Salesrep', width: 22, align: 'left' }, { name: 'Organization:', index: 'customer_company_name:', width: 56, align: 'left' }, { name: 'Status', index: 'Status', align: 'left', width: 15 }, { name: 'Active Value', index: 'Active Value', align: 'left', width: 10 }, { name: 'Delete', index: 'Delete', align: 'left', width: 10}], pager: '#proj_pager', rowList: [10, 20, 50], sortname: 'project_id', sortorder: 'asc', rowNum: 10, loadtext: "Loading....", subGrid: true, shrinkToFit: true, emptyrecords: "No records to view", width: x - 100, height: "100%", rownumbers: true, caption: 'Projects', subGridRowExpanded: function (subgrid_id, row_id) { var subgrid_table_id, pager_id; subgrid_table_id = subgrid_id + "_t"; pager_id = "p_" + subgrid_table_id; $("#" + subgrid_id).html("<table id='" + subgrid_table_id + "' class='scroll'></table><div id='" + pager_id + "' class='scroll'></div>"); jQuery("#" + subgrid_table_id).jqGrid({ mtype: 'POST', postData: { entityIndex: function () { return row_id } }, datatype: function (pdata) { getactionData(pdata); }, height: "100%", colNames: ['Event ID', 'Priority', 'Deadline', 'From Date', 'Title', 'Status', 'Hours', 'Contact From', 'Contact To'], colModel: [ { name: 'Event ID', index: 'Event ID' }, { name: 'Priority', index: 'IssueCode' }, { name: 'Deadline', index: 'IssueTitle' }, { name: 'From Date', index: 'From Date' }, { name: 'Title', index: 'Title' }, { name: 'Status', index: 'Status' }, { name: 'Hours', index: 'Hours' }, { name: 'Contact From', index: 'Contact From' }, { name: 'Contact To', index: 'Contact To' } ], caption: "Action Details", rowNum: 10, pager: '#actionpager', rowList: [10, 20, 30, 50], sortname: 'Event ID', sortorder: "desc", loadtext: "Loading....", shrinkToFit: true, emptyrecords: "No records to view", rownumbers: true, ondblClickRow: function (rowid) { } }); jQuery("#actiongrid").jqGrid('navGrid', '#actionpager', { edit: false, add: false, del: false, search: false }); } }); jQuery("#projgrid").jqGrid('navGrid', '#proj_pager', { edit: false, add: false, del: false, excel: true, search: false }); }); function getactionData(pdata) { var project_id = pdata.entityIndex(); var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value; var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked; var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked; var checkOnlyContact = document.getElementById('ctl00_ContentPlaceHolder2_chkOnlyContact').checked; var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value; var nrows = pdata.rows; var npage = pdata.page; var sortindex = pdata.sidx; var sortdir = pdata.sord; var path = "project_brow.aspx/GetActionDetails" $.ajax({ type: "POST", url: path, data: "{'project_id': '" + project_id + 
"','ChannelContact': '" + ChannelContact + "','HideCompleted': '" + HideCompleted + "','Scm': '" + Scm + "','checkOnlyContact': '" + checkOnlyContact + "','MerchantId': '" + MerchantId + "','nrows': '" + nrows + "','npage': '" + npage + "','sortindex': '" + sortindex + "','sortdir': '" + sortdir + "'}", contentType: "application/json; charset=utf-8", success: function (data, textStatus) { if (textStatus == "success") obj = jQuery.parseJSON(data.d) ReceivedData(obj); }, error: function (data, textStatus) { alert('An error has occured retrieving data!'); } }); } function ReceivedData(data) { var thegrid = jQuery("#actiongrid")[0]; thegrid.addJSONData(data); } function getData(pData) { var dtDateFrom = document.getElementById('ctl00_ContentPlaceHolder2_dtDateFrom_textBox').value; var dtDateTo = document.getElementById('ctl00_ContentPlaceHolder2_dtDateTo_textBox').value; var Status = document.getElementById('ctl00_ContentPlaceHolder2_ddlStatus').value; var Type = document.getElementById('ctl00_ContentPlaceHolder2_ddlType').value; var Channel = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannel').value; var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value; var Customers = document.getElementById('ctl00_ContentPlaceHolder2_txtCustomers').value; var KeywordSearch = document.getElementById('ctl00_ContentPlaceHolder2_txtKeywordSearch').value; var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked; var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked; var SelectedCustomerId = document.getElementById("<%=hdnSelectedCustomerId.ClientID %>").value var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value; var nrows = pData.rows; var npage = pData.page; var sortindex = pData.sidx; var sortdir = pData.sord; PageMethods.GetProjectDetails(SelectedCustomerId, Customers, KeywordSearch, MerchantId, Channel, Status, Type, dtDateTo, dtDateFrom, ChannelContact, HideCompleted, Scm, nrows, npage, sortindex, sortdir, AjaxSucceeded, AjaxFailed); } function AjaxSucceeded(data) { var obj = jQuery.parseJSON(data) if (obj != null) { if (obj.records!="") { ReceivedClientData(obj); } else { alert('No Data Available to Display') } } } function AjaxFailed(data) { alert('An error has occured retrieving data!'); } function ReceivedClientData(data) { var thegrid = jQuery("#projgrid")[0]; thegrid.addJSONData(data); } </script> as u can see projgrid is my parent grid and action grid is my subgrid to be shown onclicking the '+' symbol Projgrid is binded and being displayed but when it comes to subgrid im able to get the data but the problem comes at the time of binding data to subgrid which is done in function named ReceivedData where you can see like this function ReceivedData(data) { var thegrid = jQuery("#actiongrid")[0]; thegrid.addJSONData(data); } "data" is what i wanted exactly but it cannot be binded to actiongrid which is the subgrid Thanx in advance for help

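    A sketch of one likely fix (a guess from the code shown, not confirmed): there is no element with id "actiongrid" - the subgrid table is created with the dynamic id subgrid_table_id (the row id plus "_t") - so the returned JSON has to be bound to that element. Carrying the id through the AJAX round trip keeps the callback pointed at the right grid; the names below are taken from the question's own code.

        // Sketch: pass the dynamic subgrid table id through to the success callback
        // and bind there, instead of binding to a non-existent "#actiongrid".
        function getactionData(pdata, subgrid_table_id) {
            var project_id = pdata.entityIndex();
            $.ajax({
                type: "POST",
                url: "project_brow.aspx/GetActionDetails",
                data: "{'project_id': '" + project_id + "'}",   // other parameters omitted for brevity
                contentType: "application/json; charset=utf-8",
                success: function (data, textStatus) {
                    if (textStatus == "success") {
                        ReceivedData(jQuery.parseJSON(data.d), subgrid_table_id);
                    }
                }
            });
        }

        function ReceivedData(data, subgrid_table_id) {
            // bind to the dynamically created subgrid table, e.g. "<row_id>_t"
            var thegrid = jQuery("#" + subgrid_table_id)[0];
            thegrid.addJSONData(data);
        }

        // inside subGridRowExpanded, the subgrid's datatype function would then pass the id along:
        //   datatype: function (pdata) { getactionData(pdata, subgrid_table_id); }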
    Read the article

  • jqGrid - edit function never getting called

    - by dcp
    I'm having a problem with JQGrid using ASP.NET MVC. I'm trying to follow this example (after you go to that link, over on the left side of the page please click Live Data Manipulation, then Edit Row), but my edit function is never getting called (i.e. it's never getting into the $("#bedata").click(function(). Does anyone know what could be the problem? <script type="text/javascript"> var lastsel2; jQuery(document).ready(function() { jQuery("#editgrid").jqGrid({ url: '/Home/GetMovieData/', datatype: 'json', mtype: 'GET', colNames: ['id', 'Movie Name', 'Directed By', 'Release Date', 'IMDB Rating', 'Plot', 'ImageURL'], colModel: [ { name: 'id', index: 'Id', width: 55, sortable: false, hidden: true, editable: false, editoptions: { readonly: true, size: 10} }, { name: 'Movie Name', index: 'Name', width: 250, editable: true, editoptions: { size: 10} }, { name: 'Directed By', index: 'Director', width: 250, align: 'right', editable: true, editoptions: { size: 10} }, { name: 'Release Date', index: 'ReleaseDate', width: 100, align: 'right', editable: true, editoptions: { size: 10} }, { name: 'IMDB Rating', index: 'IMDBUserRating', width: 100, align: 'right', editable: true, editoptions: { size: 10} }, { name: 'Plot', index: 'Plot', width: 150, hidden: false, editable: true, editoptions: { size: 30} }, { name: 'ImageURL', index: 'ImageURL', width: 55, hidden: true, editable: false, editoptions: { readonly: true, size: 10} } ], pager: jQuery('#pager'), rowNum: 5, rowList: [5, 10, 20], sortname: 'id', sortorder: "desc", height: '100%', width: '100%', viewrecords: true, imgpath: '/Content/jqGridCss/redmond/images', caption: 'Movies from 2008', editurl: '/Home/EditMovieData/', caption: 'Movie List' }); }); $("#bedata").click(function() { var gr = jQuery("#editgrid").jqGrid('getGridParam', 'selrow'); if (gr != null) jQuery("#editgrid").jqGrid('editGridRow', gr, { height: 280, reloadAfterSubmit: false }); else alert("Hey dork, please select a row"); }); </script> The relevant HTML is here: <table id="editgrid"> </table> <div id="pager" style="text-align: center;"> </div> <input type="button" id="bedata" value="Edit Selected" />
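    One thing worth checking (a guess based on the snippet, not a confirmed diagnosis): the $("#bedata").click(...) call sits outside jQuery(document).ready, so if the script runs in the head before the button exists in the DOM, the handler is never attached and the edit function never fires. A minimal sketch of the rebinding:

        jQuery(document).ready(function () {
            // ... jqGrid setup as above ...

            // bind the button inside ready so #bedata is guaranteed to exist
            jQuery("#bedata").click(function () {
                var gr = jQuery("#editgrid").jqGrid('getGridParam', 'selrow');
                if (gr != null) {
                    jQuery("#editgrid").jqGrid('editGridRow', gr,
                        { height: 280, reloadAfterSubmit: false });
                } else {
                    alert("Please select a row first");
                }
            });
        });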

    Read the article

  • twitter bootstrap typeahead 2.0.4 ajax error

    - by Adam Levitt
    I have the following code, which definitely returns a proper data result if I use the 2.0.0 version, but for some reason Bootstrap's typeahead plugin is giving me an error; I pasted it below the code sample:

        <input id="test" type="text" />

        $('#test').typeahead({
            source: function (typeahead, query) {
                return $.post('/Profile/searchFor', { query: query }, function (data) {
                    return typeahead.process(data);
                });
            }
        });

    When I run this example I'm getting an error in jQuery with the following stack trace:

        item is undefined
        matcher(item=undefined)                 bootst...head.js (line 104)
        (?)(item=undefined)                     bootst...head.js (line 91)
        f(a=function(), b=function(), c=false)  jquery....min.js (line 2)
        lookup(event=undefined)                 bootst...head.js (line 90)
        keyup(e=Object { originalEvent=Event keyup, type="keyup", timeStamp=145718131, more...})  bootst...head.js (line 198)
        f()                                     jquery....min.js (line 2)
        add(c=Object { originalEvent=Event keyup, type="keyup", timeStamp=145718131, more...})    jquery....min.js (line 3)
        add(a=keyup charCode=0, keyCode=70)     jquery....min.js (line 3)
        [Break On This Error] return item.toLowerCase().indexOf(this.query.toLowerCase())

    Any thoughts? The same code works in the 2.0.0 version of the plugin, but fails to write to my knockout object model.
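    For what it's worth, the stock Bootstrap 2.0.x typeahead only accepts an array as its source; function sources with a process callback arrived in later versions (2.1+), where the signature is source(query, process). A hedged sketch of that later-version pattern, reusing the URL from the question (everything else is an assumption about the upgraded plugin):

        // Assumes a Bootstrap version whose typeahead supports a function source
        // with a process callback (2.1+); in stock 2.0.4, 'source' must be a plain array.
        $('#test').typeahead({
            source: function (query, process) {
                $.post('/Profile/searchFor', { query: query }, function (data) {
                    process(data);   // expected to be an array of strings
                });
            }
        });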

    Read the article

  • ext error with zend framework mvc

    - by terrani
    Hi, I am trying to set up an Ext JS grid within a Zend Framework MVC application. I included the Ext CSS and JS using the following code:

        $this->headScript()
            ->appendFile('/Resource/scripts/ext/jquery-1.4.2.js')
            ->appendFile('/Resource/scripts/ext/jquery/ext-jquery-adapter.js')
            ->appendFile('/Resource/scripts/ext/jquery/ext-all.js');
        $this->headLink()
            ->appendStylesheet('/Layouts/admin/css/content.css')
            ->appendStylesheet('/Layouts/admin/css/ui.css')
            ->appendStylesheet('/Layouts/admin/css/button.css')
            ->appendStylesheet('/Layouts/admin/css/moon.css')
            ->appendStylesheet('/Resource/scripts/ext/css/ext-all.css');

    When I run the code, I get the following error message in Firefox: syntax error [Break on this error] \n. What should I do to fix this?
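    A guess at the usual cause, plus a small sanity check (not a confirmed diagnosis): a one-line "syntax error" in Firebug often means one of the appended script URLs returns something that isn't JavaScript, for example the path falls through to the MVC router and an HTML error page comes back, so the first thing to verify is that all three files load with a 200. Once they do, a minimal check that Ext is actually up:

        // Sketch: confirms ext-jquery-adapter.js and ext-all.js executed without errors
        Ext.onReady(function () {
            console.log('Ext loaded, version ' + Ext.version);
            // grid construction would go here
        });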

    Read the article

  • How to set Options in jqgrid?

    - by Ankita
    I need to set jqGrid options such as toppager and forceFit, for which the "Can be changed?" value in the documentation is "No", so I tried to set them this way:

        jQuery(document).ready(function() {
            jQuery("#list").setGridParam({ forceFit: true, toppager: true }).trigger("reloadGrid");
            jQuery("#list").jqGrid({
                url: '<%= Url.Action("GridData") %>',
                datatype: 'json',
                mtype: 'GET',
                colNames: ['Time', 'Description', 'Category', 'Type', 'Originator', 'Vessel'],
                colModel: [
                    { name: 'Time', index: 'Time', width: 200, align: 'left' },
                    { name: 'Description', index: 'Description', width: 600, align: 'left' },
                    { name: 'Category', index: 'Category', width: 100, align: 'left' },
                    { name: 'Type', index: 'Type', width: 100, align: 'left' },
                    { name: 'Originator', index: 'Originator', width: 100, align: 'left' },
                    { name: 'Vessel', index: 'Vessel', align: 'left' }],
                pager: jQuery('#pager'),
                rowNum: 20,
                rowList: [10, 20, 50],
                sortname: 'Time',
                sortorder: "desc",
                viewrecords: true,
                hoverrows: false,
                gridview: true,
                emptyrecords: 'No data for the applied filter',
                height: 460,
                caption: 'Logbook Grid',
                //forceFit: true,
                width: 1200
            });
        });

    But it didn't work. Can you please let me know what exactly I am doing wrong, or the right way to do this?
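    The likely issue (hedged, but it matches the documentation note quoted above): options whose "Can be changed?" value is "No", such as forceFit and toppager, have to be passed when the grid is created; setGridParam cannot apply them afterwards, and here it is even being called before the grid exists. A minimal sketch:

        // Sketch: set "cannot be changed later" options in the initial jqGrid() call.
        jQuery(document).ready(function () {
            jQuery("#list").jqGrid({
                url: '<%= Url.Action("GridData") %>',
                datatype: 'json',
                mtype: 'GET',
                // ... colNames, colModel and the other options from the question ...
                forceFit: true,    // set at creation time
                toppager: true,    // set at creation time
                pager: jQuery('#pager'),
                viewrecords: true
            });
        });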

    Read the article

  • function (blurClass) NOT WORKING IN IE

    - by Erik
    I can't get this plugin to function properly in IE. Check out my homepage and look at the huge search field toward the top: www.naturalskin.com. Whenever I refresh the screen the "blur" loses its function and I'm stuck with text. Here is the script that I place in an external js file: http://www.naturalskin.com/src/js/javascript/batches.js

        jQuery.fn.hint = function (blurClass) {
            if (!blurClass) {
                blurClass = 'blur';
            }
            return this.each(function () {
                // get jQuery version of 'this'
                var $input = jQuery(this),
                    // capture the rest of the variable to allow for reuse
                    title = $input.attr('title'),
                    $form = jQuery(this.form),
                    $win = jQuery(window);
                function remove() {
                    if ($input.val() === title && $input.hasClass(blurClass)) {
                        $input.val('').removeClass(blurClass);
                    }
                }
                // only apply logic if the element has the attribute
                if (title) {
                    // on blur, set value to title attr if text is blank
                    $input.blur(function () {
                        if (this.value === '') {
                            $input.val(title).addClass(blurClass);
                        }
                    }).focus(remove).blur(); // now change all inputs to title
                    // clear the pre-defined text when form is submitted
                    $form.submit(remove);
                    $win.unload(remove); // handles Firefox's autocomplete
                }
            });
        };

    Erik
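    The excerpt shows the plugin definition but not the call that wires it up; a first thing to check in IE is that the plugin actually runs after the field exists in the DOM. A minimal usage sketch (the 'input[title]' selector and the 'blur' class name are guesses, not taken from the site):

        // Sketch: invoke the hint plugin once the DOM is ready so the title text and
        // the blur class are applied before the user (or IE's value restore) touches the field.
        jQuery(function ($) {
            $('input[title]').hint('blur');   // 'blur' is the CSS class that styles the hint text
        });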

    Read the article

  • How do I change JAVASCRIPT_DEFAULT_SOURCES for my application?

    - by Adam Lassek
    When you call javascript_include_tag :defaults you usually get: prototype.js, effects.js, dragdrop.js, and controls.js. These are stored in a constant in ActionView::Helpers::AssetTagHelper called 'JAVASCRIPT_DEFAULT_SOURCES`. My application uses jQuery, so I want to replace the Prototype references with something more useful. I added an initializer with these lines, based on the source code from jRails: ActionView::Helpers::AssetTagHelper::JAVASCRIPT_DEFAULT_SOURCES = %w{ jquery-1.4.min jquery-ui jquery.cookie } ActionView::Helpers::AssetTagHelper::reset_javascript_include_default But when I do this, I get: warning: already initialized constant JAVASCRIPT_DEFAULT_SOURCES during startup. What's the correct way of changing this value? In the source code it checks for the constant before setting it, but apparently that happens before it runs the initializer scripts. The Rails 3.0 release will provide much greater flexibility with choice of JS libraries, so I guess this is a problem with an expiration date.

    Read the article
