Search Results


  • Need help partitioning when reinstalling Ubuntu 14.04

    - by Chris M.
    I upgraded to 14.04 about a month ago on my HP Mini netbook (about 16 GB hard disk). A few days ago the system crashed (I don't know why, but I was using the internet at the time). When I restarted the computer, Ubuntu would not load. Instead, I got a message from the BIOS saying:

        Reboot and Select proper Boot device or Insert Boot Media in selected Boot device and press a key

    I took this to mean that I needed to reinstall 14.04. When I try to reinstall Ubuntu from the USB stick, I choose "Erase disk and install Ubuntu" but then I get a message:

        Some of the partitions you created are too small. Please make the following partitions at least this large: / 3.3 GB
        If you do not go back to the partitioner and increase the size of these partitions, the installation may fail.

    At first I hit Continue to see if it would install anyway, and it gave the message:

        The attempt to mount a file system with type ext4 in SCSI1 (0,0,0), partition #1 (sda) at / failed.
        You may resume partitioning from the partitioning menu.

    The second time I hit Go Back, and it took me to the following partitioning table:

        Device      Type   Mount Point   Format         Size      Used
        /dev/sda
        /dev/sda1   ext4                 (checked)      3228 MB   Unknown
        /dev/sda5   swap                 (not checked)  1063 MB   Unknown

        + - Change   New Partition Table...   Revert

        Device for boot loader installation: /dev/sda ATA JM Loader 001 (4.3 GB)

    At this point I'm not sure what to do. I've never partitioned my hard drive before and I don't want to screw things up. (I'm not particularly tech savvy.) Can you instruct me what I should do? (P.S. I'm afraid the table might not appear as I typed it in.)

    Results from fdisk:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 4294 MB, 4294967296 bytes
        255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sda doesn't contain a valid partition table

        Disk /dev/sdb: 7860 MB, 7860125696 bytes
        155 heads, 31 sectors/track, 3194 cylinders, total 15351808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0009a565

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *        2768    15351807     7674520    b  W95 FAT32

    Here is what it displays when I open the Disks utility (I tried the screenshot terminal command you suggested but it didn't seem to do anything):

        4.3 GB Hard Disk  /dev/sda
        Model: JM Loader 001 (01000001)
        Size: 4.3 GB (4,294,967,296 bytes)
        Serial Number: 01234123412341234
        Assessment: SMART is not supported

        Volumes
        Size: 4.3 GB (4,294,967,296 bytes)
        Device: /dev/sda
        Contents: Unknown

    (There is a button in the utility that, when you click it, gives the following options: Format..., Create Disk Image..., Restore Disk Image..., Benchmark; but SMART Data & Self-Tests... is dimmed out.)

    When I hit F9 Change Boot Device Order, it shows the hard drive as:

        SATA:PM-JM Loader 001

    When I hit F10 to get into the BIOS Setup Utility, under Diagnostic it shows:

        Primary Hard Disk Self Test    Not Support

    NetworkManager Tool shows:

        State: disconnected

        Device: eth0
        Type: Wired
        Driver: atl1c
        State: unavailable
        Default: no
        HW Address: 00:26:55:B0:7F:0C
        Capabilities: Carrier Detect: yes
        Wired Properties Carrier: off

    When I run the command lshw -C network, I get:

        WARNING: you should run this program as super-user.
        *-network
             description: Network controller
             product: BCM4312 802.11b/g LP-PHY
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: bus_master cap_list
             configuration: driver=b43-pci-bridge latency=0
             resources: irq:16 memory:feafc000-feafffff
        *-network
             description: Ethernet interface
             product: AR8132 Fast Ethernet
             vendor: Qualcomm Atheros
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: eth0
             version: c0
             serial: 00:26:55:b0:7f:0c
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.1-NAPI latency=0 link=no multicast=yes port=twisted pair
             resources: irq:43 memory:febc0000-febfffff ioport:ec80(size=128)
        WARNING: output may be incomplete or inaccurate, you should run this program as super-user.
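    For reference, one way out of this state is to give the internal disk a fresh partition table from the live USB session before retrying the installer. A minimal sketch, assuming /dev/sda really is the 4.3 GB internal disk (double-check with sudo fdisk -l first, since this erases everything on it):

        # create a new MS-DOS partition table on the internal disk
        sudo parted /dev/sda mklabel msdos
        # an ext4 root partition big enough for the 3.3 GB minimum, then swap
        sudo parted -a optimal /dev/sda mkpart primary ext4 1MiB 3400MiB
        sudo parted -a optimal /dev/sda mkpart primary linux-swap 3400MiB 100%

    After that, rerun the installer and pick "Erase disk and install Ubuntu" again, or use "Something else" to assign / and swap to the two partitions explicitly.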

    Read the article

  • Replicating between Cloud and On-Premises using Oracle GoldenGate

    - by Ananth R. Tiru
    Do you have applications running on the cloud that you need to connect with your on-premises systems? The most likely answer to this question is a resounding YES! If so, then you understand the importance of keeping the data fresh at all times across the cloud and on-premises environments. This is also one of the key focus areas for the new GoldenGate 12c release, which we announced a couple of weeks ago via a press release.

    Most enterprises have spent years avoiding the data "silos" that inhibit productivity. For example, an enterprise which has adopted a CRM strategy could be relying on an on-premises marketing application used for developing and nurturing leads. At the same time it could be using a SaaS-based sales application to create opportunities and quotes. The sales and marketing teams which use these systems need to be able to access and share the data in a reliable and cohesive way. This example can be extended to other application areas such as HR, supply chain, and finance, and to the demands the users place on getting a consistent view of the data.

    When it comes to moving data in hybrid environments, some of the key requirements include minimal latency, reliability, and security:

    - Data must remain fresh. As data ages it becomes less relevant and less valuable; day-old data is often insufficient in today's competitive landscape.
    - Reliability must be guaranteed despite system or connectivity issues that can occur between the cloud and on-premises instances.
    - Security is a key concern when replicating between cloud and on-premises instances.

    There are several options to consider when replicating between the cloud and on-premises instances.

    Option 1 - Secured network established between the cloud and on-premises. A secured network is established between the cloud and on-premises which enables the applications (including replication software) running on the cloud and on-premises to have seamless connectivity to other applications, irrespective of where they are physically located.

    Option 2 - Restricted network established between the cloud and on-premises. A restricted network is established between the cloud and on-premises instances which enables certain ports (required by replication) to be opened on both the cloud and the on-premises instances, and whitelists the IP addresses of the cloud and on-premises instances.

    Option 3 - Restricted network access from on-premises and cloud through an HTTP proxy. This option can be considered when the ports required by the applications (including replication software) are not open and the cloud instance is not whitelisted on the on-premises instance. This option of tunneling through an HTTP proxy should only be considered when proper security exceptions are obtained.

    Oracle GoldenGate

    Oracle GoldenGate is used by major Fortune 500 companies and other industry leaders worldwide to support mission-critical systems for data availability and integration. Oracle GoldenGate addresses the requirements for ensuring data consistency between cloud and on-premises instances, thus enabling the business process to run effectively and reliably.

    The architecture diagram below illustrates the scenario where the cloud and the on-premises instance are connected using GoldenGate through a secured network. In this scenario, Oracle GoldenGate is installed and configured on both the cloud and the on-premises instances. On the cloud instance, Oracle GoldenGate is installed and configured on the machine where the database instance can be accessed. Oracle GoldenGate can be configured for unidirectional or bi-directional replication between the cloud and on-premises instances. The specific configuration details of the Oracle GoldenGate processes will depend upon the option selected for establishing connectivity between the cloud and on-premises instances.
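    As a rough illustration of what the unidirectional case could look like, here is a minimal sketch of Extract and Replicat parameter files. The process names, schema, credentials, trail paths, and host are all hypothetical placeholders; the knowledge article mentioned below covers the real configuration details for each connectivity option.

        -- Extract on the source (on-premises) instance: capture and ship changes
        EXTRACT extcld
        USERID ogguser, PASSWORD ********
        RMTHOST cloud-host.example.com, MGRPORT 7809
        RMTTRAIL ./dirdat/rt
        TABLE sales.orders;

        -- Replicat on the target (cloud) instance: apply from the remote trail
        REPLICAT repcld
        USERID ogguser, PASSWORD ********
        ASSUMETARGETDEFS
        MAP sales.orders, TARGET sales.orders;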
    The knowledge article (ID 1588484.1) titled 'Replicating between Cloud and On-Premises using Oracle GoldenGate' discusses in detail the options for replicating between the cloud and on-premises instances. The article can be found on My Oracle Support.

    To learn more about Oracle GoldenGate 12c, register for our launch webcast where we will go into these new features in more detail. You may also want to download our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    I would love to hear your requirements for replicating between on-premises and cloud instances, as well as your comments about the strategy discussed in the knowledge article to address your needs. Please post your comments in this blog or in the Oracle GoldenGate public forum: https://forums.oracle.com/community/developer/english/business_intelligence/system_management_and_integration/goldengate

    Read the article

  • Running Powershell from within SharePoint

    - by Norgean
    Just because something is a daft idea doesn't mean it can't be done. We sometimes need to do some housekeeping - like delete old files or list items or... yes, well, whatever you use Powershell for in a SharePoint world. Or it could be that your solution has "issues" for which you have Powershell solutions, but not the budget to transform into proper bug fixes. So you create a "how to" for the ITPro guys.

    Idea: What if we keep the scripts in a list, and have SharePoint execute the scripts on demand? An announcements list (because of the multiline body field). Warning! Let us be clear: this list needs to be locked down; if somebody creates a malicious script and you run it, I cannot help you.

    First, we need to figure out how to start Powershell scripts from C#. Hit teh interwebs and the Googlie, and you may find jpmik's post: http://www.codeproject.com/Articles/18229/How-to-run-PowerShell-scripts-from-C (or MS' official answer at http://msdn.microsoft.com/en-us/library/ee706563(v=vs.85).aspx):

        public string RunPowershell(string powershellText, SPWeb web, string param1, string param2)
        {
            // Powershell ~= RunspaceFactory - i.e. create a powershell context
            var runspace = RunspaceFactory.CreateRunspace();
            var resultString = new StringBuilder();
            try
            {
                // load the SharePoint snapin - note: you cannot do this in the
                // script itself (i.e. add-pssnapin etc. does not work)
                PSSnapInException snapInError;
                runspace.RunspaceConfiguration.AddPSSnapIn("Microsoft.SharePoint.PowerShell", out snapInError);
                runspace.Open();
                // set a web variable...
                runspace.SessionStateProxy.SetVariable("webContext", web);
                // ...and some user-defined parameters
                runspace.SessionStateProxy.SetVariable("param1", param1);
                runspace.SessionStateProxy.SetVariable("param2", param2);
                var pipeline = runspace.CreatePipeline();
                pipeline.Commands.AddScript(powershellText);
                // add a "return" variable
                pipeline.Commands.Add("Out-String");
                // execute!
                var results = pipeline.Invoke();
                // convert the script result into a single string
                foreach (PSObject obj in results)
                {
                    resultString.AppendLine(obj.ToString());
                }
            }
            finally
            {
                // close the runspace
                runspace.Close();
            }
            // consider logging the result. Or something.
            return resultString.ToString();
        }

    Ok. We've written some code. Let us test it:

        var runner = new PowershellRunner();
        runner.RunPowershell(@"
            $web = Get-SPWeb 'http://server/web' # or $webContext
            $web.Title = $param1
            $web.Update()
            $web.Dispose()
            ", null, "New title", "not used");

    Next step: connect the code to the list, or more specifically, have the code execute on one (or several) list items. As there are more options than readers, I'll leave this as an exercise for the reader. Some alternatives:

    - Create a ribbon button that calls RunPowershell with the body of the selected items
    - Add a layout page
    - Specify the list item from the query string (possibly coupled with a content editor webpart with html that links directly to this page with the querystring)
    - Webpart
    - Listing with an "execute" column
    - List with multiselect and an execute button
    - Etc!

    Now that you have the code for executing powershell scripts, you can easily expand this into a timer job which executes scripts at regular intervals (a rough sketch follows after the snippet below). But if the previous solution was dangerous, this is even worse: the scripts will usually be run with one of the admin accounts, and can do pretty much anything...

    One more thing... Note that as this is running "consoleless", calls to Write-Host will fail. Two solutions: remove all output, or check if the script is run in a console window or not:
        if ($host.Name -eq "ConsoleHost") { Write-Host "If I agreed with you we'd both be wrong" }
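    As for the timer job idea mentioned above, a rough sketch could look like the following. The class name, list name, and URL are made-up placeholders, and error handling is omitted:

        public class PowershellScriptJob : SPJobDefinition
        {
            public PowershellScriptJob() : base() { }

            public PowershellScriptJob(string name, SPWebApplication webApp)
                : base(name, webApp, null, SPJobLockType.Job) { }

            public override void Execute(Guid targetInstanceId)
            {
                using (var site = new SPSite("http://server/web")) // hypothetical URL
                using (var web = site.OpenWeb())
                {
                    // the locked-down announcements list holding the scripts
                    var list = web.Lists["Scripts"];
                    foreach (SPListItem item in list.Items)
                    {
                        var runner = new PowershellRunner();
                        runner.RunPowershell((string)item["Body"], web, null, null);
                    }
                }
            }
        }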

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use - a type of Software as a Service (SaaS) for IT workers, not necessarily for end users. Among those offerings is the "Data Hub", a codename for a project that ironically actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share, or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally, bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, strings, and exploration issues all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it's available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options by which you or your users can access that data. You're then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy, but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface: nothing to install, configure, update, or manage.

    After the data is entered in, and you've assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal, which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel, which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are, given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 (see the sketch below). You can make HTTP calls instead of code, and the data can even be exposed as an OData feed.

    As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
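    To make the "simple API calls" point a little more concrete, a sketch along these lines would pull a feed over HTTP from C#. The service URL here is a hypothetical placeholder; the MSDN link above describes the actual endpoints:

        using System;
        using System.IO;
        using System.Net;

        class DataHubClient
        {
            static void Main()
            {
                // hypothetical Data Hub OData feed URL
                var request = (HttpWebRequest)WebRequest.Create(
                    "https://example-datahub.cloudapp.net/odata/SalesData");
                request.Credentials = new NetworkCredential("user", "password");

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    // raw OData payload (Atom or JSON, depending on the Accept header)
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }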

    Read the article

  • Juggling with JDKs on Apple OS X

    - by Blueberry Coder
    I recently got a shiny new MacBook Pro to help me support our ADF Mobile customers. It is really a wonderful piece of hardware, although I am still adjusting to Apple's peculiar keyboard layout. Did you know, for example, that the « delete » key actually performs a « backspace »? But I digress...

    As you may know, ADF Mobile development still requires JDeveloper 11gR2, which in turn runs on Java 6. On the other hand, JDeveloper 12c needs JDK 7. I wanted to install both versions, and wasn't sure how to do it.

    If you remember, I explained in a previous blog entry how to install JDeveloper 11gR2 on Apple's OS X. The trick was to use the /usr/libexec/java_home command in order to invoke the proper JDK. In this case, I could have done the same thing; the two JDKs can coexist without any problems, since they install in completely different locations. But I wanted more than just installing JDeveloper. I wanted to be able to select my JDK when using the command line as well. On Windows, this is easy, since I keep all my JDKs in a central location. I simply have to move to the appropriate folder or type the folder name in the command I want to execute. Problem is, on OS X, the paths to the JDKs are... let's say convoluted. Here is the one for Java 6:

        /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

    The Java 7 path is not better, just different:

        /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home

    Intuitive, isn't it? Clearly, I needed something better... On OS X, the default command shell is bash. It is possible to configure the shell environment by creating a file named « .profile » in a user's home folder. Thus, I created such a file and put the following inside:

        export JAVA_7_HOME=$(/usr/libexec/java_home -v1.7)
        export JAVA_6_HOME=$(/usr/libexec/java_home -v1.6)
        export JAVA_HOME=$JAVA_7_HOME
        alias java6='export JAVA_HOME=$JAVA_6_HOME'
        alias java7='export JAVA_HOME=$JAVA_7_HOME'

    The first two lines retrieve the current paths for Java 7 and Java 6 and store them in two environment variables. The third line marks Java 7 as the default. The last two lines create command aliases. Thus, when I type java6, the value for JAVA_HOME is set to JAVA_6_HOME, for example.

    I now have an environment which works even better than the one I have on Windows, since I can change my active JDK on a whim. Here is a sample, fresh from my terminal window:

        fdesbien-mac:~ fdesbien$ java6
        fdesbien-mac:~ fdesbien$ java -version
        java version "1.6.0_65"
        Java(TM) SE Runtime Environment (build 1.6.0_65-b14-462-11M4609)
        Java HotSpot(TM) 64-Bit Server VM (build 20.65-b04-462, mixed mode)
        fdesbien-mac:~ fdesbien$
        fdesbien-mac:~ fdesbien$ java7
        fdesbien-mac:~ fdesbien$ java -version
        java version "1.7.0_45"
        Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
        Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
        fdesbien-mac:~ fdesbien$

    Et voilà! Maximum flexibility without downsides, just as I like it.
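    A small variation on the same idea, in case you ever juggle more than two JDKs: a parameterized shell function instead of one alias per version. This is just a sketch to add to the same .profile:

        # switch JAVA_HOME to any installed major version, e.g. "setjdk 1.6"
        setjdk() {
            export JAVA_HOME=$(/usr/libexec/java_home -v "$1")
        }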

    Read the article

  • Intern Screening - Software 'Quiz'

    - by Jeremy1026
    I am in charge of selecting a new software development intern for a company that I work with. I wanted to throw a little 'quiz' at the applicants before moving forward with interviews, so as to weed out the group a little bit and find some people who can demonstrate some skill. I put together the following quiz to send to applicants. It focuses only on PHP, but that is because that is what about 95% of the work will be done in. I'm hoping to get some feedback on (A) if it's a good idea to send this to applicants and (B) if it can be improved upon.

        # 1. FizzBuzz
        # Write a small application that does the following:
        # Counts from 1 to 100
        # For multiples of 3 output "Fizz"
        # For multiples of 5 output "Buzz"
        # For multiples of 3 and 5 output "FizzBuzz"
        # For numbers that are not multiples of 3 nor 5 output the number.
        <?php ?>

        # 2. Arrays
        # Create a multi-dimensional array that contains
        # keys for 'id', 'lot', 'car_model', 'color', 'price'.
        # Insert three sets of data into the array.
        <?php ?>

        # 3. Comparisons
        # Without executing the code, tell if the expressions
        # below will return true or false.
        <?php
        if ((strpos("a","abcdefg")) == TRUE) echo "True"; else echo "False"; //True or False?
        if ((012 / 4) == 3) echo "True"; else echo "False"; //True or False?
        if (strcasecmp("abc","ABC") == 0) echo "True"; else echo "False"; //True or False?
        ?>

        # 4. Bug Checking
        # The code below is flawed. Fix it so that the code
        # runs properly without producing any Errors, Warnings
        # or Notices, and returns the proper value.
        <?php
        //Determine how many parts are needed to create a 3D pyramid.
        function find_3d_pyramid($rows)
        {
            //Loop through each row.
            for ($i = 0; $i < $rows; $i++)
            {
                $lastRow++;
                //Append the latest row to the running total.
                $total = $total + (pow($lastRow,3));
            }
            //Return the total.
            return $total;
        }
        $i = 3;
        echo "A pyramid consisting of $i rows will have a total of ".find_3d_pyramid($i)." pieces.";
        ?>

        # 5. Quick Examples
        # Create a small example to complete the task
        # for each of the following problems.
        # Create an md5 hash of "Hello World";
        # Replace all occurrences of "_" with "-" in the string "Welcome_to_the_universe."
        # Get the current date and time, in the following format, YYYY/MM/DD HH:MM:SS AM/PM
        # Find the sum, average, and median of the following set of numbers: 1, 3, 5, 6, 7, 9, 10.
        # Randomly roll a six-sided die 5 times. Store the 5 rolls into an array.
        <?php ?>
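    If it helps with grading, a reference answer can be kept alongside the quiz. A minimal solution to question 1 might look like this:

        <?php
        for ($i = 1; $i <= 100; $i++) {
            if ($i % 15 == 0) {
                echo "FizzBuzz\n";
            } elseif ($i % 3 == 0) {
                echo "Fizz\n";
            } elseif ($i % 5 == 0) {
                echo "Buzz\n";
            } else {
                echo $i . "\n";
            }
        }
        ?>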

    Read the article

  • Video works with 'Try me' but not after install. What is the difference? U12.04LTS

    - by HarveyP
    My hard drive got corrupted so I did a reinstall. Tested YouTube in FF during 'try me' and it worked - jerky, but it worked. Installed without all the updates (576 outstanding now) in order to get FF installed as per the demo - to no avail. In 'try me' mode FF NEVER crashed! After install, FF crashed whilst I was typing 'youtube' in the address field. When I finally got to YouTube - no video. What is the difference between FF in 'try me' and FF after install? Off to try some selected updates now to see if I can see it for myself.

    In a previous installation I had several profiles and aliased FF with the -safe-mode switch to simplify startup of the most stable FF. Also found that FF startup in graphic mode worked better (but still without video) with all of the extensions disabled and all of the plugins set to "ask" and always denied... I have a SiS graphics card in a SiS motherboard for XP and an ancient Hyundai ImageQuest QV770 monitor. I have Ubuntu 12.04.01 LTS, 1 day after install, with only the immediate upgrades requested to the language pack (English UK). Using the FR Alternative keyboard. Connected with a domestic wifi network from Orange (FT). I really want to use Skype, but won't bother installing it (again) without video, as I can do my sms on FB - whilst FF is not crashed...

    Update... Is something overflowing? I have just had to reboot in order to get FF to restart in any way, shape or form - the restart-on-crash form generates a new crash form, etc. It was however a good half hour before it crashed, so some improvement over conditions before the disk corruption. I have now installed all of the critical updates (332 recommended updates still outstanding), which included some relating to FF. Still no video. Still crashing - especially when on the Grepolis website.

    Since the re-install I have had a lovely 1024x768 screen, but after the last FF crash and reboot I got a message about 'low graphics mode' and 'setting things myself'. I was not sufficiently tuned in at the time to take proper note - I have no doubt I shall see it again and shall report accordingly. I still have only laptop options for my screen and do not know how to rectify this.

    Spent a few days with Ubuntu on a different, newer machine, which has now suffered a graphics breakdown. Returned to this old one again, but with a new flat-screen monitor. Found SiS drivers for my graphics, BUT they are intended for Red Hat 7.2. I chose this over the version for 7.0 because I thought, what the hell, I might not be able to do anything with either of them, but this is the later one... The file will not open with the software manager - I found a similar problem on Overclock but it has not helped me to install this driver. The file name is sis_drv.o-410 and it is currently idling away in my Downloads folder... I have tried the solution offered on another SiS problem, but it shows that my xserver-xorg-video-sis driver is up to date. I am now at a loss as to how to proceed if I can't install the latest SiS driver from SiS...

    Does nobody know how FF changes from "try-me" to "installed"? Any time I MUST have video I reboot from the disk again, but this is tedious! Also, one of the things I mock most about MS is the constant rebooting...

    UPDATE 10/6/2014: I have installed chromium-browser - worse, it crashes even more often than FF. I have installed epiphany - better; video works, but not the associated soundtrack. FireFox is version 14.01 in 'try me' and version 29.0 from my install. Would it be useful to try to downgrade FireFox in order to get video?

    Read the article

  • Application Logging needs work

    Application Logging

    Application logging is the act of logging events that occur within an application, much like how a court reporter documents what happens in a court case. Application logs can be useful for several reasons, but the most common use for logs is to recreate the steps that led to an application error so its root cause can be found. Other uses include the detection of fraud, verification of user activity, and audits of user/data interactions. "Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging." (OWASP, 2009) OWASP also states that logging should include applicable debugging information like the event date and time, the responsible process, and a description of the event.

    "There are many reasons why a logging system is a necessary part of delivering a distributed application. One of the most important is the ability to track exactly how many users are using the application during different time periods." (Hatton, 2000) Hatton also states that application logging helps system designers determine whether parts of an application aren't being used as designed. He implies that low usage can be used to identify whether users like or dislike aspects of a system, based on their usage of the application. This enables application designers to extract why users don't like aspects of an application, so that changes can be made to increase its usefulness and effectiveness.

    "Logging memory usage can also assist you in tuning up the internals of your application. If you're experiencing a randomly occurring problem, being able to match activities performed with the memory status at the time may enable you to discover the cause of the problem. It also gives you a good indication of the health of the distributed server machine at the time any activity is performed." (Hatton, 2000)

    Commonly Logged Application Events (Defined by OWASP):

    - Access of Data
    - Creation of Data
    - Modification of Data in any form
    - Administrative Functions
    - Configuration Changes
    - Debugging Information (Application Events)
    - Authorization Attempts
    - Data Deletion
    - Network Communication
    - Authentication Events
    - Errors/Exceptions

    Application Error Logging

    The functionality associated with application error logging is actually the combination of proper error handling and application logging. If we look back at Figure 4 and Figure 5, these code examples allow developers to handle various types of errors that occur within the life cycle of an application's execution. Application logging can be applied within the Catch section of the Try-Catch statement, allowing errors to be logged when they occur. By placing the logging within the Catch section, specific error details can be accessed that help identify the source of the error, the path to the error, what caused the error, and the definition of the error that occurred. This can then be logged and reviewed at a later date in order to recreate the error based on data found in the application log. By allowing applications to log errors, developers and IT staff can use the logs to recreate errors that are encountered by end users or other dependent systems.
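    Since no specific logging framework is prescribed above, the following is a minimal C# sketch of the Try-Catch logging pattern just described; the log path, class name, and method are placeholders:

        using System;
        using System.IO;

        public static class ErrorLogger
        {
            // Appends one entry per error with the details needed to recreate it later.
            public static void Log(Exception ex, string context)
            {
                string entry = string.Format("{0:u}\t{1}\t{2}\t{3}{4}{5}{4}",
                    DateTime.UtcNow, context, ex.GetType().Name, ex.Message,
                    Environment.NewLine, ex.StackTrace);
                File.AppendAllText(@"C:\logs\application.log", entry);
            }

            // Example of logging applied within the Catch section of a Try-Catch.
            public static void ProcessOrder(int orderId)
            {
                try
                {
                    // ... application logic that may throw ...
                }
                catch (Exception ex)
                {
                    Log(ex, "ProcessOrder(" + orderId + ")");
                    throw; // rethrow so callers and dependent systems still see the failure
                }
            }
        }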

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx

    While cloud computing makes sense for most organizations and countless projects, I have seen customers significantly struggle with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather, it is an account of personal experiences in the field, some of which may or may not apply to your organization.

    Cloud First, Burst?

    In the rush to cloud adoption, some companies have made the decision to redesign their core system with a cloud-first approach. However, a cloud-first approach means that the system may not work anymore on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), or additional investments for PaaS applications, or both.

    What's the Problem?

    Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I heard many companies claim "it's cheaper", or "it allows us to scale", without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst into the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption.

    Performance or Scalability?

    I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop". While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system wouldn't scale. However, another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption.

    Uptime?

    Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! Other components may be restarted at any time, or services may experience failover conditions weekly (or more often) based on current overall conditions of the cloud service provider, most of which are outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and gracefully retry in order to be successful in the cloud and provide service continuity to end users.
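    As an illustration of that last point, a transient-fault retry wrapper might look like this minimal C# sketch (the attempt count and backoff are arbitrary placeholders, not a recommendation for any particular cloud service):

        using System;
        using System.Threading;

        static class Transient
        {
            // Runs an operation, retrying with exponential backoff on failure.
            public static T Retry<T>(Func<T> operation, int maxAttempts)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (Exception)
                    {
                        if (attempt >= maxAttempts) throw; // give up, surface the error
                        // back off before retrying the (hopefully transient) failure
                        Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                    }
                }
            }
        }

    A caller would then wrap any cloud call, for example: var rows = Transient.Retry(() => QueryFederatedData(), 4);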
    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration, and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.

    Read the article

  • SQL Server: Writing CASE expressions properly when NULLs are involved

    - by Mladen Prajdic
    We've all written a CASE expression (yes, it's an expression and not a statement) or two every now and then. But did you know there are actually 2 formats you can write the CASE expression in? This actually bit me when I was trying to add some new functionality to an old stored procedure. In some rare cases the stored procedure just didn't work correctly. After a quick look it turned out to be a CASE expression problem when dealing with NULLs.

    In the first format we make simple "equals to" comparisons to a value:

        SELECT CASE <value>
                   WHEN <equals this value> THEN <return this>
                   WHEN <equals this value> THEN <return this>
                   -- ... more WHENs here
                   ELSE <return this>
               END

    The second format is much more flexible, since it allows for complex conditions. USE THIS ONE!

        SELECT CASE
                   WHEN <value> <compared to> <value> THEN <return this>
                   WHEN <value> <compared to> <value> THEN <return this>
                   -- ... more WHENs here
                   ELSE <return this>
               END

    Now that we know both formats, and you know which to use (the second one, if that hasn't been clear enough), here's an example of how the first format WILL make your evaluation logic WRONG. Run the following code for different values of @i. Just comment out any 2 out of the 3 "SELECT @i =" statements.

        DECLARE @i INT
        SELECT  @i = -1   -- first result
        SELECT  @i = 55   -- second result
        SELECT  @i = NULL -- third result

        SELECT @i AS OriginalValue,
               -- first CASE format. DON'T USE THIS!
               CASE @i
                   WHEN -1   THEN '-1'
                   WHEN NULL THEN 'We have a NULL!'
                   ELSE 'We landed in ELSE'
               END AS DontUseThisCaseFormatValue,
               -- second CASE format. USE THIS!
               CASE
                   WHEN @i = -1    THEN '-1'
                   WHEN @i IS NULL THEN 'We have a NULL!'
                   ELSE 'We landed in ELSE'
               END AS UseThisCaseFormatValue

    When the value of @i is -1, everything works as expected, since both formats go into the -1 WHEN branch. When the value of @i is 55, everything again works as expected, since both formats go into the ELSE branch. When the value of @i is NULL, the problems become evident. The first format doesn't go into the WHEN NULL branch, because it makes an equality comparison between two NULLs. Because a NULL is an unknown value, NULL = NULL is false. That is why the first format goes into the ELSE branch, while the second format correctly handles the proper IS NULL comparison.

    Please use the second, more explicit format. Your future self will be very grateful to you when he doesn't have to discover these kinds of bugs.

    Read the article

  • EPPM Is a Must-Have Capability as Global Energy and Power Industries Eye US$38 Trillion in New Investments

    - by Melissa Centurio Lopes
    "The process manufacturing industry is facing an unprecedented challenge: from now until 2035, cumulative worldwide investments of US$38 trillion will be required for drilling, power generation, and other energy projects," Iain Graham, director of energy and process manufacturing for Oracle's Primavera, said in a recent webcast. He adds that process manufacturing organizations such as oil and gas, utilities, and chemicals must manage this level of investment in an environment of constrained capital markets, erratic supply and demand, aging infrastructure, heightened regulations, and declining global skills. In the following interview, Graham explains how the right enterprise project portfolio management (EPPM) technology can help the industry meet these imperatives.

    Q: Why is EPPM so important for today's process manufacturers?

    A: If the industry invests US$38 trillion without proper cost controls in place, a huge amount of resources will be put at risk, especially when it comes to cost overruns that may occur in large capital projects. Process manufacturing companies must not only control costs, but also monitor all the various contractors that will be involved in each project. If you're not managing your own workers and all the interdependencies among the different contractors, then you've got problems.

    Q: What else should process manufacturers look for?

    A: It's also important that an EPPM solution has the ability to manage more than just capital projects. For example, it's best to manage maintenance and capital projects in the same system. Say you're due to install a new transformer in a power station as part of a capital project, but routine maintenance in that area of the facility is scheduled for that morning. The lack of coordination could lead to unforeseen delays. There are also IT considerations that impact capital projects, such as adding servers and network cable for a control system in a power station. What organizations need is a true EPPM system that's not just for capital projects, maintenance, or IT activities, but instead an enterprisewide solution that provides visibility into all types of projects.

    Read the complete Q&A here and discover the practical framework for successfully managing this massive capital spending.

    Read the article

  • How to get the height of an image and apply that height to a div? [migrated]

    - by Mick79
    I am building a mobile web app and I'm using the jquerytools slider on it. I want the slider to show (in proper ratio) across all mobile devices, so the width of the images is 100% and the height is auto in the CSS. However, as all the elements are floated and the jquerytools slider requires the position to be set to absolute, the containing div (#header) doesn't stretch to fit the content. I am trying to use jQuery to get the height of the img and apply that height to the header... however I am having no luck.

    CSS:

        #header {
            width: 100%;
            position: relative;
            z-index: 20;
            /* box-shadow: 0 0 10px white; */
            overflow: auto;
        }

        .scrollable {
            position: relative;
            overflow: hidden;
            width: 100%;
            height: 100%;
            /* box-shadow: 0 0 20px purple; */
            /* height: 198px; */
            z-index: 20;
            overflow: auto;
        }

        .scrollable .items {
            /* this cannot be too large */
            width: 1000%;
            position: absolute;
            clear: both;
            /* box-shadow: 0 0 30px green; */
        }

        .items div {
            float: left;
            width: 10%;
            height: 100%;
        }

        /* single scrollable item */
        .scrollable img {
            /* float: left; */
            width: 100%;
            height: auto;
            /* height: 198px; */
        }

        /* active item */
        .scrollable .active {
            border: 2px solid #000;
            position: relative;
            cursor: default;
        }

    HTML:

        <div id="header"><!-- root element for scrollable -->
            <div class="scrollable" id="scrollable">
                <!-- root element for the items -->
                <div class="items">
                    <div> <img src="img/img2.jpg" /> </div>
                    <div> <img src="img/img1.jpg" /> </div>
                    <div> <img src="img/img3.jpg" /> </div>
                    <div> <img src="img/img4.jpg" /> </div>
                    <div> <img src="img/img6.jpg" /> </div>
                </div><!-- items -->
            </div><!-- scrollable -->
        </div><!-- header -->
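    For reference, the kind of jQuery being attempted might look like this minimal sketch. It assumes the markup above and runs on window load, so the image height is actually known by the time it is measured:

        $(window).load(function () {
            // measure the first slide image and give #header its height
            var h = $('#scrollable img').first().height();
            $('#header').height(h);
        });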

    Read the article

  • Why is 0 false?

    - by Morwenn
    This question may sound dumb, but why does 0 evaluate to false and any other [integer] value to true in most programming languages?

    String comparison

    Since the question seems a little bit too simple, I will explain myself a little bit more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be, but not any I used - where 0 evaluates to true and all the other [integer] values to false? That one remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of strings' three-way comparison; I will take C's strcmp as an example: any programmer trying C as his first language may be tempted to write the following code:

        if (strcmp(str1, str2)) {
            // Do something...
        }

    Since strcmp returns 0, which evaluates to false when the strings are equal, what the beginning programmer tried to do fails miserably, and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its most simple expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have considered the return type as bool (in our minds I mean) most of the time.

    Moreover, let's introduce a new type, sign, that just takes the values -1, 0 and 1. That can be pretty handy. Imagine there is a spaceship operator in C++ and we want it for std::string (well, there already is the compare function, but the spaceship operator is more fun). The declaration would currently be the following one:

        sign operator<=>(const std::string& lhs, const std::string& rhs);

    Had 0 been evaluated to true, the spaceship operator wouldn't even exist, and we could have declared operator== that way:

        sign operator==(const std::string& lhs, const std::string& rhs);

    This operator== would have handled three-way comparison at once, and could still be used to perform the following check, while still being able to check which string is lexicographically superior to the other when needed:

        if (str1 == str2) {
            // Do something...
        }

    Old errors handling

    We now have exceptions, so this part only applies to the old languages where no such thing exists (C for example). If we look at C's standard library (and POSIX's too), we can see for sure that maaaaany functions return 0 when successful and any other integer otherwise. I have sadly seen some people do this kind of thing:

        #define TRUE 0
        // ...
        if (some_function() == TRUE)
        {
            // Here, TRUE would mean success...
            // Do something
        }

    If we think about how we think in programming, we often have the following reasoning pattern:

        Do something. Did it work?
        Yes -> That's ok, one case to handle
        No  -> Why? Many cases to handle

    If we think about it again, it would have made sense to put the only neutral value, 0, to yes (and that's how C's functions work), while all the other values can be there to solve the many cases of the no. However, in all the programming languages I know (except maybe some experimental esoteric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations when "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense.

    Conclusion

    My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking into account my few examples above and maybe some more I did not think of?

    Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originally asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)

    Read the article

  • Border image on UIView

    - by drunknbass
    I want to have a UIView subclass that has a border image, but I don't want or care about this 'new' frame/bounds around the border image itself. What I wanted to do was just use drawRect and draw outside of the rect, but all drawing is clipped, and I don't see a way to not clip drawing outside of this context rect.

    So now I have added a sublayer to the view's layer, set clipsToBounds on the view, and overridden setFrame to control my sublayer's frame and always keep it at the proper size (spilling over the view's frame by 40px). The problem with this is that setFrame on a UIView by default has no animation, but setFrame on a CALayer does. I can't just disable the animations on the CALayer's setFrame, because if I were to call setFrame on the UIView inside a UIView animation block, the CALayer would still have its animation disabled.

    The obvious solution is to look up the current animationDuration of the UIView animation and set a matching animation on the sublayer, but I don't know if that value is available. And even if it is, I'm afraid that calling an animation from within another animation is wrong.

    Unfortunately, the best solution is to not use a CALayer at all and just add a UIView as a subview, drawing into it just like I am drawing into my layer, and hope that with autoresizingMask set to height and width everything will 'just work'. It just seems like unnecessary overhead for such a simple task.

    Read the article

  • glm matrix conversion for DirectX

    - by niktehpui
    For one of the coursework specifications I need to work with DirectX, so I tried to implement a DirectX renderer in my small cross-platform framework (to have it optionally available for Windows). Since I want to stick to my dependencies, I want to use glm for vector/matrix/quaternion math. The vectors seem to be fully compatible with DirectX, but glm::mat4 is not working properly in the DirectX Effects Framework. I assumed the reason is that DirectX uses row-major layouts and OpenGL column-major (although, if I remember right, internally in HLSL DX uses column-major as well), so I transposed the matrix, but I still get no proper results compared to using XNA Math.

    XNA version of the code (works):

        XMMATRIX world = XMMatrixIdentity();
        XMMATRIX view = XMMatrixLookAtLH(XMVectorSet(5.0, 5.0, 5.0, 1.0f),
                                         XMVectorZero(),
                                         XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));
        XMMATRIX proj = XMMatrixPerspectiveFovLH(0.25f*3.14f, 1.25f, 1.0f, 1000.0f);
        XMMATRIX worldViewProj = world*view*proj;
        m_fxWorldViewProj->SetMatrix(reinterpret_cast<float*>(&worldViewProj));

    This works flawlessly and displays the expected colored cube.

    GLM version (does not work):

        glm::mat4 world(1.0f);
        glm::mat4 view = glm::lookAt(glm::vec3(5.0f, 5.0f, 5.0f),
                                     glm::vec3(0.0f, 0.0f, 0.0f),
                                     glm::vec3(0.0f, 1.0f, 0.0f));
        glm::mat4 proj = glm::perspective(0.25f*3.14f, 1.25f, 1.0f, 1000.0f);
        glm::mat4 worldViewProj = glm::transpose(world*view*proj);
        m_fxWorldViewProj->SetMatrix(glm::value_ptr(worldViewProj));

    Displays nothing; the screen stays black. I really would like to stick to glm on all platforms.
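    One hedged observation worth adding: the XNA code uses the left-handed LookAtLH/PerspectiveFovLH helpers, while glm::lookAt and glm::perspective are right-handed, so handedness is a suspect on top of the row/column-major question. Recent glm releases ship left-handed variants; a sketch like the one below takes that mismatch out of the equation, but verify that your glm version actually provides lookAtLH and perspectiveLH before relying on this:

        // assumes a glm build recent enough to provide the LH variants
        glm::mat4 world(1.0f);
        glm::mat4 view = glm::lookAtLH(glm::vec3(5.0f, 5.0f, 5.0f),
                                       glm::vec3(0.0f, 0.0f, 0.0f),
                                       glm::vec3(0.0f, 1.0f, 0.0f));
        glm::mat4 proj = glm::perspectiveLH(0.25f * 3.14f, 1.25f, 1.0f, 1000.0f);
        glm::mat4 worldViewProj = glm::transpose(world * view * proj);
        m_fxWorldViewProj->SetMatrix(glm::value_ptr(worldViewProj));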

    Read the article

  • Using Visual Studio 2008 to Assemble, Link, Debug, and Execute MASM 6.11 Assembly Code

    - by Kreychek
    I would like to use Visual Studio 2008 to the greatest extent possible while effectively compiling/linking/building/etc. code as if all these build processes were being done by the tools provided with MASM 6.11. The exact version of MASM does not matter, so long as it's within the 6.x range, as that is what my college is using to teach 16-bit assembly. I have done some research on the subject and have come to the conclusion that there are several options:

    1. Reconfigure VS to call the MASM 6.11 executables with the same flags, etc. as MASM 6.11 would natively do.
    2. Create intermediary batch file(s) to be called by VS, which then invoke the proper commands for MASM's assembler, linker, etc.
    3. Reconfigure VS's built-in build tools/rules (assembler, linker, etc.) to provide an environment identical to the one used by MASM 6.11.

    Option (2) was brought up when I realized that the options available in VS's "External Tools" interface may be insufficient to correctly invoke MASM's build tools; a batch file to interpret VS's strict method of passing arguments might be helpful, as a lot of my learning about how to get this working involved manually calling ML.exe, LINK.exe, etc. from the command prompt.

    Below are several links that may prove useful in answering my question. Please keep in mind that I have read them all and none are the actual solution. I can only hope my specifying MASM 6.11 doesn't prevent anyone from contributing a perhaps more generalized answer.

    - A similar method to option (2), but the users on the thread are not contactable: http://www.codeguru.com/forum/archive/index.php/t-284051.html (also, I have my doubts about the necessity of an intermediary batch file)
    - An out-of-date explanation of my question: http://www.cs.fiu.edu/~downeyt/cop3402/masmaul.html
    - Probably the closest thing I've come to a definitive solution, but it refers to a suite of tools from something besides MASM, and also uses a batch file: http://www.kipirvine.com/asm/gettingStarted/index.htm#16-bit

    I apologize if my terminology for the tools used in each step of the code-to-exe process is off, but since I'm trying to reproduce the entirety of the steps between completing the code and generating an executable, I don't think it matters much.
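    To make option (2) concrete, the intermediary batch file could be as small as the following sketch. It assumes MASM 6.11's BIN directory is on the PATH and that VS passes the source file's base name (without extension) as the first argument:

        @echo off
        rem assemble with CodeView debug info, then link the 16-bit executable
        ML /c /Zi %1.asm
        if errorlevel 1 goto end
        LINK %1.obj;
        :end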

    Read the article

  • Conditionally add htmlAttributes to ASP.NET MVC Html.ActionLink

    - by macca1
    I'm wondering if it's possible to conditionally add a parameter in a call to a method. For example, I am rendering a bunch of links (six total) for navigation in my Site.Master:

        <%= Html.ActionLink("About", "About", "Pages") %> |
        <%= Html.ActionLink("Contact", "Contact", "Pages") %>
        <%-- etc, etc. --%>

    I'd like to include a CSS class of "selected" for the link if it's on that page. So in my controller I'm returning this:

        ViewData.Add("CurrentPage", "About");
        return View();

    And then in the view I have an htmlAttributes dictionary:

        <% Dictionary<string, object> htmlAttributes = new Dictionary<string, object>();
           htmlAttributes.Add("class", "selected"); %>

    Now my only question is how do I include the htmlAttributes for the proper ActionLink. I could do it this way for each link:

        <% htmlAttributes.Clear();
           if (ViewData["CurrentPage"] == "Contact")
               htmlAttributes.Add("class", "selected"); %>
        <%= Html.ActionLink("Contact", "Contact", "Pages", htmlAttributes) %>

    But that seems a little repetitive. Is there some way to do something like this pseudocode:

        <%= Html.ActionLink("Contact", "Contact", "Pages",
            if (ViewData["CurrentPage"] == "Contact") { htmlAttributes }) %>

    That's obviously not valid syntax, but is there a correct way to do that? I'm open to any totally different suggestions for rendering these links. I'd like to stay with something like ActionLink that takes advantage of using my routes, though, instead of hard coding the tag.
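    One common way to avoid the repetition is a small HtmlHelper extension that decides on the class itself. A sketch, assuming ViewData carries the current page key as shown above (the helper name is made up, and older MVC versions return string rather than MvcHtmlString):

        using System.Collections.Generic;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;
        using System.Web.Routing;

        public static class MenuLinkExtensions
        {
            // Renders an ActionLink, adding class="selected" for the current page.
            public static MvcHtmlString MenuLink(this HtmlHelper html,
                string linkText, string actionName, string controllerName)
            {
                var htmlAttributes = new Dictionary<string, object>();
                if ((html.ViewData["CurrentPage"] as string) == linkText)
                    htmlAttributes.Add("class", "selected");

                return html.ActionLink(linkText, actionName, controllerName,
                    new RouteValueDictionary(), htmlAttributes);
            }
        }

    The links in Site.Master then collapse to <%= Html.MenuLink("Contact", "Contact", "Pages") %> and so on.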

    Read the article

  • sort django queryset by latest instance of a subset of related model

    - by rsp
    I have an Order model and an order_event model. Each order_event has a ForeignKey to Order, so from an order instance I can get: myorder.order_event_set. I want to get a list of all orders, but I want them sorted by the date of the last event. A statement like this works to sort by the latest event date:

        queryset = Order.objects.all().annotate(
            latest_event_date=Max('order_event__event_datetime')
        ).order_by('latest_event_date')

    However, what I really need is a list of all orders sorted by the latest date of A SUBSET OF EVENTS. For example, my events are categorized into "scheduling", "processing", etc. So I should be able to get a list of all orders sorted by the latest scheduling event. This django doc (https://docs.djangoproject.com/en/dev/topics/db/aggregation/#filter-and-exclude) shows how I can get the latest scheduling event using a filter, but this excludes orders without a scheduling event. I thought I could combine the filtered queryset with a queryset that includes back those orders that are missing a scheduling event... but I'm not quite sure how to do this. I saw answers related to using a python list, but it would be much more useful to have a proper django queryset (e.g. for a view with pagination, etc.).
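    On newer Django versions (2.0 and later), conditional aggregation expresses this directly. A sketch, assuming the event model has a category field and that orders without a scheduling event should sort last:

        from django.db.models import F, Max, Q

        orders = (
            Order.objects
            .annotate(
                latest_scheduling=Max(
                    'order_event__event_datetime',
                    filter=Q(order_event__category='scheduling'),
                )
            )
            # orders with no scheduling event get a NULL annotation
            .order_by(F('latest_scheduling').desc(nulls_last=True))
        )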

    Read the article

  • winforms databinding best practices

    - by Kaiser Soze
    Demands / problems:

    - I would like to bind multiple properties of an entity to controls in a form, some of which are read-only from time to time (according to business logic).
    - When using an entity that implements INotifyPropertyChanged as the DataSource, every change notification refreshes all the controls bound to that data source (easy to verify: just bind two properties to two controls and invoke a change notification on one of them; you will see that both properties are hit and reevaluated).
    - There should be user-friendly error notifications (the entity implements IDataErrorInfo), probably using ErrorProvider.

    Using the entity as the DataSource of the controls leads to performance issues and makes life harder when it's time for a control to be read-only. I thought of creating some kind of wrapper that holds the entity and a specific property, so that each control would be bound to a different DataSource. Moreover, that wrapper could hold the ReadOnly indicator for that property, so the control would be bound directly to that value. The wrapper could look like this:

        interface IPropertyWrapper : INotifyPropertyChanged, IDataErrorInfo
        {
            object Value { get; set; }
            bool IsReadOnly { get; }
        }

    But this also means a different ErrorProvider for each property (property wrapper). I feel like I'm trying to reinvent the wheel... What is the 'proper' way of handling complex binding demands like these? A fuller sketch of such a wrapper follows. Thanks ahead.
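    A minimal, untested sketch of what that wrapper could look like, assuming the entity implements both INotifyPropertyChanged and IDataErrorInfo; every name here is illustrative rather than an established API:

        using System;
        using System.ComponentModel;

        public class PropertyWrapper : INotifyPropertyChanged, IDataErrorInfo
        {
            private readonly INotifyPropertyChanged entity;
            private readonly PropertyDescriptor property;
            private readonly Func<bool> isReadOnly;   // business-logic hook

            public PropertyWrapper(INotifyPropertyChanged entity,
                                   string propertyName,
                                   Func<bool> isReadOnly)
            {
                this.entity = entity;
                this.property = TypeDescriptor.GetProperties(entity)[propertyName];
                this.isReadOnly = isReadOnly;
                // Relay only the notifications for this one property, so a
                // change to any other property never touches this binding.
                entity.PropertyChanged += (s, e) =>
                {
                    if (e.PropertyName == propertyName)
                        PropertyChanged?.Invoke(this,
                            new PropertyChangedEventArgs(nameof(Value)));
                };
            }

            public object Value
            {
                get { return property.GetValue(entity); }
                set { if (!IsReadOnly) property.SetValue(entity, value); }
            }

            public bool IsReadOnly { get { return isReadOnly(); } }

            public event PropertyChangedEventHandler PropertyChanged;

            // IDataErrorInfo is forwarded to the entity's own validation.
            public string Error
            {
                get { return ((IDataErrorInfo)entity).Error; }
            }

            public string this[string columnName]
            {
                get { return ((IDataErrorInfo)entity)[property.Name]; }
            }
        }

    Each control then binds to its own wrapper (textBox.DataBindings.Add("Text", wrapper, "Value")), and one ErrorProvider per wrapper stops being painful if the wrappers are created in a loop alongside the controls.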

    Read the article

  • Add an array of buttons to a GridView in an Android application

    - by Tai Squared
    I have an application that will have 5-15 buttons depending on what is available from a backend. How do I define the proper GridView layout files to include an array of buttons that will each have different text and other attributes? Each button will essentially add an item to a cart, so the onClick code will be the same except for the item it adds. How can I define an array so I can add a variable number of buttons but still reference each of them by a unique ID? I've seen examples of arrays.xml, but they create an array of strings that are pre-set; I need a way to create the objects without the text being defined in the layout or arrays XML file. Update - added info about adding to a GridView: I want to add this to a GridView, so calling the addView method results in an UnsupportedOperationException. I can do the following: ImageButton b2 = new ImageButton(getApplicationContext()); b2.setBackgroundResource(R.drawable.img_3); android.widget.LinearLayout container = (android.widget.LinearLayout) findViewById(R.id.lay); container.addView(b2); but that doesn't lay out the buttons in a grid as I would like. Can this be done in a GridView?
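    A minimal sketch of the usual route: a GridView fills its cells through an Adapter's getView() (which is why addView() throws UnsupportedOperationException), so a BaseAdapter can build one Button per backend item. Item and the callback interface are assumed names, not part of any existing code here:

        import android.content.Context;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.BaseAdapter;
        import android.widget.Button;
        import java.util.List;

        public class ButtonAdapter extends BaseAdapter {
            public interface OnAddToCart { void add(Item item); } // assumed callback

            private final Context context;
            private final List<Item> items; // Item is an assumed model class
            private final OnAddToCart cart;

            public ButtonAdapter(Context context, List<Item> items, OnAddToCart cart) {
                this.context = context;
                this.items = items;
                this.cart = cart;
            }

            @Override public int getCount() { return items.size(); }
            @Override public Object getItem(int position) { return items.get(position); }
            @Override public long getItemId(int position) { return position; }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                // Reuse the recycled view when the framework hands one back.
                final Button button = (convertView instanceof Button)
                        ? (Button) convertView : new Button(context);
                final Item item = items.get(position);
                button.setText(item.getName()); // per-button text from the backend
                button.setOnClickListener(new View.OnClickListener() {
                    @Override public void onClick(View v) { cart.add(item); }
                });
                return button;
            }
        }

    Attached with gridView.setAdapter(new ButtonAdapter(this, items, item -> cart.add(item))), the position (or the Item itself) serves as the unique identifier, so no per-button resource IDs are needed.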

    Read the article

  • EXT-js PropertyGrid best practices to achieve an update?

    - by Tom
    Hello, I am using EXT-js for a project. Usually everything is pretty straightforward with EXT-js, but with the PropertyGrid I am not sure, and I'd like some advice about this piece of code. First, the store that populates the property grid on the load event:

        var configStore = new Ext.data.JsonStore({
            // store config
            autoLoad: true,
            url: url.remote,
            baseParams: { xaction: 'read' },
            storeId: 'configStore',
            // reader config
            idProperty: 'id_config',
            root: 'config',
            totalProperty: 'totalcount',
            fields: [
                { name: 'id_config' },
                { name: 'email_admin' },
                { name: 'default_from_addr' },
                { name: 'default_from_name' },
                { name: 'default_smtp' }
            ],
            listeners: {
                load: {
                    fn: function (store, records, options) {
                        // get the property grid component
                        var propGrid = Ext.getCmp('propGrid');
                        // make sure the property grid exists
                        if (propGrid) {
                            // populate the property grid with store data
                            propGrid.setSource(store.getAt(0).data);
                        }
                    }
                }
            }
        });

    Here is the PropertyGrid:

        var propsGrid = new Ext.grid.PropertyGrid({
            renderTo: 'prop-grid',
            id: 'propGrid',
            width: 462,
            autoHeight: true,
            propertyNames: {
                tested: 'QA',
                borderWidth: 'Border Width'
            },
            viewConfig: {
                forceFit: true,
                scrollOffset: 2 // the grid will never have scrollbars
            }
        });

    So far so good, but with the next button I'll trigger an old-school update, and my question is: is that the proper way to update this component, or is it better to use an editor, or something else? For a regular grid I use the store methods to do the update, delete, etc. The examples are really scarce on this one, even in books about ext-js!

        new Ext.Button({
            renderTo: 'button-container',
            text: 'Update',
            handler: function () {
                var grid = Ext.getCmp("propGrid");
                var source = grid.getSource();
                var jsonDataStr = Ext.encode(source);
                var requestCg = {
                    url: url.update,
                    method: 'post',
                    params: { config: jsonDataStr, xaction: 'update' },
                    timeout: 120000,
                    callback: function (options, success, response) {
                        alert(success + "\t" + response);
                    }
                };
                Ext.Ajax.request(requestCg);
            }
        });

    Thanks for reading.
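    One alternative sketch, assuming the Ext 3.x PropertyGrid fires its documented propertychange event (worth confirming against your version's API docs): save each property as it is edited instead of posting the whole source from a button.

        // per-edit save; url.update is the same endpoint the button uses
        propsGrid.on('propertychange', function (source, recordId, value, oldValue) {
            Ext.Ajax.request({
                url: url.update,
                method: 'post',
                params: {
                    xaction: 'update',
                    property: recordId, // the property name that changed
                    value: value
                }
            });
        });

    The button-driven bulk update is still a legitimate pattern; the per-edit listener just removes the need to encode and resend unchanged properties.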

    Read the article

  • Using Word COM objects in .NET, WinWord process won't quit after calling Word.Documents.Add

    - by Keith
    I'm running into the classic scenario where, when creating Word COM objects in .NET (via the Microsoft.Office.Interop.Word assembly), the WinWord process won't exit even though I'm properly closing and releasing the objects. I've narrowed it down to the use of the Word.Documents.Add() method. I can work with Word in other ways without a problem (opening documents, modifying contents, etc.) and WinWord.exe quits when I tell it to. It's once I use the Add() method that the process is left running. Here is a simple example which reproduces the problem:

        Dim oWord As New Word.Application()
        oWord.Visible = False
        Dim oDocuments As Word.Documents = oWord.Documents
        Dim oDoc As Word.Document = oDocuments.Add(Template:=CObj(sTemplatePath), _
            NewTemplate:=False, _
            DocumentType:=Word.WdNewDocumentType.wdNewBlankDocument, _
            Visible:=False)

        ' dispose objects
        oDoc.Close()
        While (Marshal.ReleaseComObject(oDoc) < 0)
        End While
        oDoc = Nothing

        oWord.Quit()
        While (Marshal.ReleaseComObject(oWord) < 0)
        End While
        oWord = Nothing

    As you can see, I'm creating and disposing the objects properly, even taking the extra step of looping Marshal.ReleaseComObject until it returns the proper code. Working with the Word objects is fine in other regards; it's just that pesky Documents.Add that is causing me grief. Is there another object that gets created in this process that I need to reference and dispose of? Is there another disposal step I need to follow? Something else? Your help is much appreciated :)
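    One thing stands out, offered as an untested sketch rather than a confirmed fix: the snippet releases oDoc and oWord but never the Documents collection captured in oDocuments, and a single live RCW is enough to keep WINWORD.EXE alive. A teardown that releases all three and then forces the finalizers might look like this:

        ' dispose in reverse order of acquisition; every COM accessor that
        ' was stored in a variable needs its own release call
        oDoc.Close()
        Marshal.FinalReleaseComObject(oDoc)
        oDoc = Nothing

        Marshal.FinalReleaseComObject(oDocuments) ' the collection the original code skips
        oDocuments = Nothing

        oWord.Quit()
        Marshal.FinalReleaseComObject(oWord)
        oWord = Nothing

        ' let the runtime free any RCWs it created behind the scenes
        GC.Collect()
        GC.WaitForPendingFinalizers()
        GC.Collect()
        GC.WaitForPendingFinalizers()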

    Read the article

  • iPhone: How to Display Text from UIWebView HTML Document in a UITextView

    - by ArgMan
    I have an RSS feed that gets arranged in a UITableView, which lets the user select a story that loads in a UIWebView. However, I'd like to stop using the UIWebView and just use a UITextView or UILabel. The attached image (not reproduced here) shows what I am trying to do: just display the various text aspects of a news story. I have tried using: NSString *myText = [webView stringByEvaluatingJavaScriptFromString:@"document.documentElement.textContent"]; and assigning the string to a UILabel, but it doesn't work from where I am implementing it in webViewDidFinishLoad (is that not the proper place?). I get a blank textView and a normal webView. If I overlay a UITextView on top of a UIWebView on its own (that is, a webView that just loads one page), the code posted above works and displays the text fine. The problem arises when I try to process the RSS feed. I've been stuck wondering why this doesn't work as it should for a few days now. If you have a better, more efficient way of doing it than placing the code in webViewDidFinishLoad, please let me know! Does it go in my didSelectRowAtIndexPath? Thank you very much in advance!
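    One hedged guess at the difference: webViewDidFinishLoad: fires once per frame, so a multi-frame story page can trigger it before the main content has loaded, while a simple single-frame page cannot. A sketch that waits until the view reports it has stopped loading (storyLabel is an assumed UILabel outlet):

        - (void)webViewDidFinishLoad:(UIWebView *)webView {
            if (webView.isLoading) {
                return; // another frame of the story page is still loading
            }
            NSString *myText = [webView stringByEvaluatingJavaScriptFromString:
                @"document.documentElement.textContent"];
            self.storyLabel.text = myText;
        }

    If the goal is to drop UIWebView entirely, parsing the story text out of the RSS item itself, rather than scraping the rendered page, avoids the delegate timing problem altogether.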

    Read the article

  • Sharepoint - Passing parameters in URL to NewForm.aspx

    - by kevin
    Any suggestions would be great. I've inherited a system and have been asked to add a context menu item that allows adding a new item. I've set up the context menu with the new option, and NewForm.aspx accepts and pulls parameters from the URL to populate some fields. The context menu was created with a Content Editor web part and the following JavaScript:

        function Custom_AddDocLibMenuItems(m, ctx) {
            var strAction = "window.location='http://address?para1=....'";
            var strDisplayText = "Taxonomy";
            var strImagePath = "";
            // Add our new menu item
            CAMOpt(m, strDisplayText, strAction, strImagePath);
            // add a separator to the menu
            CAMSep(m);
            // false means that the standard menu items should also be rendered
            return false;
        }

    I'm having difficulty getting the proper values to concatenate into the strAction string (which should become the full URL of NewForm.aspx). Any suggestions on how I can retrieve the column values for the item the user right-clicked to open the context menu?
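    A sketch of one way to read the clicked item from inside the callback, assuming the SharePoint 2007 core.js globals itemTableParam and GetAttributeFromItemTable() plus the ctx fields shown are available in this farm's core.js (worth verifying, since these are unofficial internals); the target list path and query parameters are placeholders:

        function Custom_AddDocLibMenuItems(m, ctx) {
            // ID of the row the user right-clicked
            var itemId = GetAttributeFromItemTable(itemTableParam, "ItemId", "Id");
            // GUID of the hosting list, handy for lookups on the new form
            var listGuid = ctx.listName;
            var strAction = "window.location='" + ctx.HttpRoot +
                "/Lists/SomeList/NewForm.aspx?sourceId=" + itemId +
                "&listGuid=" + escape(listGuid) + "'";
            CAMOpt(m, "Taxonomy", strAction, "");
            CAMSep(m);
            return false;
        }

    From there, the NewForm.aspx logic that already reads URL parameters can map sourceId back to the source item's column values via the list web services or server-side code.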

    Read the article

< Previous Page | 134 135 136 137 138 139 140 141 142 143 144 145  | Next Page >