Search Results

Search found 34836 results on 1394 pages for 'off the shelf software'.


  • PaaS, DBaaS and the Oracle Database Cloud Service

    - by yaldahhakim
    As with many widely hyped areas, there is much more variation within the broad spectrum of products referred to as "Cloud" than is immediately apparent. This variation is evident in one of the key misunderstandings about the Oracle Database Cloud Service. People could be forgiven for thinking that the Database Cloud Service is a Database-as-a-Service (DBaaS), but this is actually not true. The Database Cloud Service is a Platform-as-a-Service (PaaS), which presents a different user and developer interface and has a different set of qualities.

    A good way to think about the difference between these two varieties of Cloud offerings is that you, the customer, have to deal with things at the level of the offering, but not with anything below it. In practice, this means that for DBaaS you do not have to deal with hardware or system software, including installation and maintenance; you also do not have much control over the configuration of those layers. For PaaS, you don't have to deal with hardware, system software, or database software, and you also do not have control over these levels in the stack. So you cannot modify configuration parameters for the database with the Database Cloud Service: your interface is through SQL and PL/SQL, with Application Express (included in the Database Cloud Service), through JDBC for Java apps running in the Java Cloud Service, or through RESTful Web Services.

    You will notice what is not mentioned there: SQL*Net. You cannot access your Oracle Database Cloud Service by changing an entry in the TNSNames file and using SQL*Net, so the effort involved in migrating an existing Oracle Database in your data center to the Database Cloud Service may be prohibitive. The good news is that Application Express and the RESTful Web Services wizard in the Database Cloud Service allow you to develop new applications very quickly, and, of course, the provisioning of the entire Database Cloud Service takes only minutes.
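
    To make the access model concrete, here is a minimal sketch of reading data over a RESTful service exposed by the Database Cloud Service; the host name, path and JSON field names are hypothetical placeholders, not documented Oracle endpoints, and only the overall pattern (HTTPS plus JSON, no SQL*Net) reflects the service described above:

      # Hypothetical sketch only: the URL and the "items" envelope below are
      # placeholders, not real Oracle endpoints.
      import json
      import urllib.request

      SERVICE_URL = ("https://myservice-mydomain.db.cloud.oracle.com"
                     "/apex/hr/employees/")  # RESTful service defined with the wizard

      with urllib.request.urlopen(SERVICE_URL) as resp:  # plain HTTPS; no SQL*Net anywhere
          payload = json.load(resp)

      for row in payload.get("items", []):  # assumed response envelope
          print(row)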

    Read the article

  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: mission accomplished successfully. And it was fun! Using the resources listed in my previous article about learning content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump start videos on Channel 9, and the various authors at Pluralsight.

    Local Prometric testing centre
    Back in November I chose a local testing centre which was the easiest to access from my office, despite the horrible traffic you might experience here on the island. Actually, it was not the closest one. But because of their website, their awards as a Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI my priority. Boy, would I regret this decision this morning... The official Prometric exam guide asks any attendee to show up at least 30 minutes prior to the scheduled time of the test. Well, this should have been the easier part, but unfortunately due to heavier traffic than usual I arrived only 20 minutes ahead of time. Not too bad, but more was to come. The building, called 'le Hub', is nicely renovated and provides the right environment for an IT group of companies like FRCI. I think they currently have 5 independent IT departments over there. Even the handling at the reception was straightforward, welcoming, and put me at ease. But then... first shock: "We don't have any exam registration for today." - Hm, that's nice... Here's my mail confirmation from Prometric. First attack successfully handled, and the lady went off again to check their records. Next shock: a couple of minutes later, another guy tries to explain to me that "the staff of the testing centre is already on vacation and the centre is officially closed." - Are you kidding me? Here's the official confirmation by Prometric, and I don't find it funny that I take a day off only to hear this kind of blubbering nonsense. I thought I'd be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally came back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I was allowed to sit down and take the exam.

    Exam details
    Well, you know the rules. Signing an NDA doesn't allow me to provide you any details about the questions or topics that were covered. Please check out the official exam description, and you're on the right way. Sorry, guys... ;-)

    The result
    "Congratulations! You have passed this Microsoft Certification exam." - In general, I have to admit that the parts on HTML5 and CSS3 were the easiest after all, and that I have to get myself a little more familiar with certain JavaScript features like class definitions, inheritance and data security. Anyway, exam passed - who cares about the details?

    Next goal
    Of course, the journey to Microsoft Certifications continues, and my next goal is to pass exams 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript - and 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. This would allow me to achieve the MCSD: Windows Store Apps using HTML5 certification. I guess during 2013 I'll be busy with various learning and teaching lessons.

    Read the article

  • What's wrong with my wireless?

    - by dazzle
    I am having issues with my wireless connection. My connection is constantly disconnecting, then attempting to reconnect, reconnecting momentarily, then disconnecting again, on time scales that range from seconds to minutes. In the meantime, needless to say, I'm having significant packet loss. I'm running Ubuntu 14.04 64-bit, updated and upgraded as of today.

    Here is my card and driver:

      delta@sager:~$ lspci -vq | grep -i wireless -B 1 -A 5
      04:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73)
          Subsystem: Intel Corporation Dual Band Wireless-AC 7260
          Flags: bus master, fast devsel, latency 0, IRQ 47
          Memory at f7d00000 (64-bit, non-prefetchable) [size=8K]
          Capabilities:
          Kernel driver in use: iwlwifi

    Here is my kernel:

      delta@sager:~$ uname -r
      3.13.0-34-generic

    None of the other machines on my home network are having these issues. Windows Vista is networking without issue, for goodness sake ;-)

    Here is a small clipping from the output of dmesg. As you can see, I am getting a cfg80211 message of some sort over and over again (FYI, I've replaced my MAC address with a series of dashes, so anywhere there is a ---------------, that is where the MAC address was):

      [ 1881.739161] wlan1: authenticate with ---------------
      [ 1881.741561] wlan1: send auth to --------------- (try 1/3)
      [ 1881.743440] wlan1: authenticated
      [ 1881.746027] wlan1: associate with --------------- (try 1/3)
      [ 1881.749244] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
      [ 1881.754727] wlan1: associated
      [ 1881.754827] cfg80211: Calling CRDA for country: US
      [ 1881.761552] cfg80211: Regulatory domain changed to country: US
      [ 1881.761559] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 1881.761564] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
      [ 1881.761568] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
      [ 1881.761571] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1881.761574] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1881.761577] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1881.761580] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
      [ 1881.761584] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
      [ 1882.391038] cfg80211: Calling CRDA to update world regulatory domain
      [ 1882.396254] cfg80211: World regulatory domain updated:
      [ 1882.396260] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 1882.396265] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1882.396268] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1882.396271] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [ 1882.396274] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1882.396277] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1886.148252] wlan1: authenticate with ---------------
      [ 1886.150005] wlan1: send auth to --------------- (try 1/3)
      [ 1886.151807] wlan1: authenticated
      [ 1886.154847] wlan1: associate with --------------- (try 1/3)
      [ 1886.158147] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
      [ 1886.163464] wlan1: associated
      [ 1886.163520] wlan1: Limiting TX power to 30 (30 - 0) dBm as advertised by ---------------
      [ 1886.163588] cfg80211: Calling CRDA for country: US
      [ 1886.170500] cfg80211: Regulatory domain changed to country: US
      [ 1886.170508] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 1886.170513] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
      [ 1886.170517] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
      [ 1886.170520] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1886.170523] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1886.170526] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1886.170529] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
      [ 1886.170533] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
      [ 1887.200197] cfg80211: Calling CRDA to update world regulatory domain
      [ 1887.203655] cfg80211: World regulatory domain updated:
      [ 1887.203659] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 1887.203662] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1887.203664] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1887.203666] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [ 1887.203668] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 1887.203670] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    I've poked around on AskUbuntu, and have not found any adequate solutions; I have also found similar threads that were left unanswered. Any advice/experience/threads I might be able to pull on would be greatly appreciated. In your opinion, is this a kernel issue, hardware issue, etc.? Thanks in advance.

    EDIT: chili, here's the output of iwconfig:

      delta@sager:~$ iwconfig
      wlan1     IEEE 802.11abg  ESSID:"LANbeforetime"
                Mode:Managed  Frequency:2.412 GHz  Access Point: -----------
                Bit Rate=48 Mb/s   Tx-Power=16 dBm
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:off
                Link Quality=44/70  Signal level=-66 dBm
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:0  Invalid misc:80  Missed beacon:0

      eth0      no wireless extensions.

      lo        no wireless extensions.
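
    For anyone wanting a starting point, a minimal sketch of a commonly reported iwlwifi workaround; the premise (that 802.11n aggregation or power saving is the trigger) is an assumption to test, not a confirmed diagnosis for this card and kernel:

      # Assumption: disconnects are caused by 802.11n or power saving in the
      # iwlwifi driver. Both module options exist for iwlwifi; whether they
      # help this particular Wireless 7260 is unverified.
      echo "options iwlwifi 11n_disable=1 power_save=0" | \
          sudo tee /etc/modprobe.d/iwlwifi-workaround.conf

      # Reload the driver so the options take effect (or simply reboot)
      sudo modprobe -r iwlwifi && sudo modprobe iwlwifi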

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap, graphics. Raster images consist of tiny squares of color information, referred to as pixels, that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats.

    However, it is a known issue that the rendering of raster images can be problematic when creating graphics over a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format on a standalone Windows machine:

      R> library(ORE)
      R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)
      R> ore.doEval(function() image(volcano, col=terrain.colors(30)))

    Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors.

    Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device.

    Option #1: Raise the Remote Desktop Color Depth setting
    In a Remote Desktop session, all environment variables, including the display variables determining Color Depth, are determined by the RDP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The available settings are 15, 16, 24, or 32 bits per pixel. To raise the Remote Desktop color depth:

      1. On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.
      2. Under Connections, right click on RDP-Tcp and select Properties.
      3. On the Client Settings tab, either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel.
      4. Click Apply, then OK, then log out of the remote session and reconnect.

    After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel, and raster graphics will display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot.

    Option #2: Direct plot output to an alternate device
    Plotting to a non-windows device is a good option if it's not possible to increase the Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11 and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives: it draws a raster image within the current plot's grid.
      R> options(device = "CairoWin")  # use Cairo device for plotting during the session
      R> library(Cairo)                # load the Cairo, grid and png libraries
      R> library(grid)
      R> library(png)
      R> res <- ore.doEval(function() image(volcano, col=terrain.colors(30)))  # create embedded R plot
      R> img <- ore.pull(res, graphics = TRUE)$img[[1]]                        # extract image
      R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE)             # generate raster graph
      R> dev.off()                                                             # turn off first device

    By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:

      R> help(Devices)

    Read the article

  • Trying to set up wireless

    - by JohnMerlino
    I'm trying to set up wireless on a Vostro 1520 Dell laptop, with the latest Ubuntu install. Here's the output of some of the commands that I was told to run:

      viggy@ubuntu:~$ lshw -C network
      WARNING: you should run this program as super-user.
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:08:00.0
             logical name: eth0
             version: 03
             serial: 00:24:e8:da:84:25
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168d-1.fw ip=192.168.2.6 latency=0 multicast=yes port=MII speed=100Mbit/s
             resources: irq:47 ioport:3000(size=256) memory:f6004000-f6004fff memory:f6000000-f6003fff memory:f6020000-f603ffff
        *-network
             description: Network controller
             product: BCM4312 802.11b/g LP-PHY
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:0e:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: bus_master cap_list
             configuration: driver=b43-pci-bridge latency=0
             resources: irq:18 memory:fa000000-fa003fff
        *-network DISABLED
             description: Wireless interface
             physical id: 1
             logical name: wlan0
             serial: 0c:60:76:05:ee:74
             capabilities: ethernet physical wireless
             configuration: broadcast=yes driver=b43 driverversion=3.2.0-29-generic firmware=N/A multicast=yes wireless=IEEE 802.11bg

      viggy@ubuntu:~$ lspci
      00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
      00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
      00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
      00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
      00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
      00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
      00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
      00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
      00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
      00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
      00:1c.2 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 3 (rev 03)
      00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03)
      00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 03)
      00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 03)
      00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
      00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
      00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
      00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
      00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03)
      00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03)
      00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
      08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
      0e:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)
      1a:00.0 FireWire (IEEE 1394): O2 Micro, Inc. Device 10f7 (rev 01)
      1a:00.1 SD Host controller: O2 Micro, Inc. Device 8120 (rev 01)
      1a:00.2 Mass storage controller: O2 Micro, Inc. Device 8130 (rev 01)

      viggy@ubuntu:~$ iwconfig
      lo        no wireless extensions.

      wlan0     IEEE 802.11bg  ESSID:off/any
                Mode:Managed  Access Point: Not-Associated  Tx-Power=0 dBm
                Retry long limit:7  RTS thr:off  Fragment thr:off
                Power Management:on

      eth0      no wireless extensions.

    At this point in time, I don't have wireless.
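
    Given that lshw shows wlan0 DISABLED with firmware=N/A under the b43 driver, here is a minimal sketch of the firmware install usually suggested for the BCM4312 LP-PHY; it assumes the wired connection is up, and on some older releases the package was named firmware-b43-lpphy-installer:

      # The b43 driver needs proprietary Broadcom firmware that Ubuntu
      # cannot ship by default; this package downloads and installs it.
      sudo apt-get update
      sudo apt-get install firmware-b43-installer

      # Reload the driver so it picks up the firmware (or reboot)
      sudo modprobe -r b43 && sudo modprobe b43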

    Read the article

  • How do I get 5.1 surround sound working on an Acer Aspire 5738ZG?

    - by kbargais_LV
    I have a problem with sound. I tried everything, but no results. :( I have 3 sound ports. Here is my daemon.conf:

      # This file is part of PulseAudio.
      #
      # PulseAudio is free software; you can redistribute it and/or modify
      # it under the terms of the GNU Lesser General Public License as published by
      # the Free Software Foundation; either version 2 of the License, or
      # (at your option) any later version.
      #
      # PulseAudio is distributed in the hope that it will be useful, but
      # WITHOUT ANY WARRANTY; without even the implied warranty of
      # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
      # General Public License for more details.
      #
      # You should have received a copy of the GNU Lesser General Public License
      # along with PulseAudio; if not, write to the Free Software
      # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.

      ## Configuration file for the PulseAudio daemon. See pulse-daemon.conf(5) for
      ## more information. Default values are commented out. Use either ; or # for
      ## commenting.

      ; daemonize = no
      ; fail = yes
      ; allow-module-loading = yes
      ; allow-exit = yes
      ; use-pid-file = yes
      ; system-instance = no
      ; local-server-type = user
      ; enable-shm = yes
      ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
      ; lock-memory = no
      ; cpu-limit = no

      ; high-priority = yes
      ; nice-level = -11

      ; realtime-scheduling = yes
      ; realtime-priority = 5

      ; exit-idle-time = 20
      ; scache-idle-time = 20

      ; dl-search-path = (depends on architecture)

      ; load-default-script-file = yes
      ; default-script-file = /etc/pulse/default.pa

      ; log-target = auto
      ; log-level = notice
      ; log-meta = no
      ; log-time = no
      ; log-backtrace = 0

      resample-method = speex-float-1
      ; enable-remixing = yes
      ; enable-lfe-remixing = no

      flat-volumes = no

      ; rlimit-fsize = -1
      ; rlimit-data = -1
      ; rlimit-stack = -1
      ; rlimit-core = -1
      ; rlimit-as = -1
      ; rlimit-rss = -1
      ; rlimit-nproc = -1
      ; rlimit-nofile = 256
      ; rlimit-memlock = -1
      ; rlimit-locks = -1
      ; rlimit-sigpending = -1
      ; rlimit-msgqueue = -1
      ; rlimit-nice = 31
      ; rlimit-rtprio = 9
      ; rlimit-rttime = 1000000

      ; default-sample-format = s16le
      ; default-sample-rate = 44100
      ; default-sample-channels = 6
      ; default-channel-map = front-left,front-right

      default-fragments = 8
      default-fragment-size-msec = 10

      ; enable-deferred-volume = yes
      ; deferred-volume-safety-margin-usec = 8000
      ; deferred-volume-extra-delay-usec = 0
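
    For reference, a minimal sketch of the daemon.conf lines usually changed to request 6-channel output; both options appear (commented out) in the file above, and whether this alone is enough for 5.1 on this particular Aspire's codec is an assumption:

      ; In /etc/pulse/daemon.conf, uncomment and set the channel defaults.
      ; The values below request a 5.1 layout; see pulse-daemon.conf(5).
      default-sample-channels = 6
      default-channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe

      ; Then restart PulseAudio for the current user:
      ;   pulseaudio -k && pulseaudio --start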

    Read the article

  • MySQL for Excel new features (1.2.0): Save and restore Edit sessions

    - by Javier Rivera
    Today we are going to talk about another new feature included in the latest MySQL for Excel release to date (1.2.0), which can be installed directly from our MySQL Installer downloads page.

    Since the first release you have been able to open a session to directly edit data from a MySQL table in Excel on a worksheet and see those changes reflected immediately in the database. You were also able to open multiple sessions to work with different tables at the same time (when they belong to the same schema). The problem was that if for any reason you were forced to close Excel or the workbook you were working on, you had no way to save the state of those open sessions, and to continue where you left off you needed to reopen them one by one. Well, that's no longer a problem, since we are now introducing a new feature to save and restore active Edit sessions.

    All you need to do is click the Options button on the main MySQL for Excel panel and make sure the Edit Session Options (highlighted in yellow) are set correctly, especially that Restore saved Edit sessions is checked. Then just begin an Edit session like you normally would: select the connection and schema on the main panel, select the table you want to edit data from, click Edit MySQL Data, and import the MySQL data into Excel. You can edit data like you always did with the previous version.

    To test the save and restore functionality, first save the workbook while at least one Edit session is open, and close the file. Then reopen the workbook. The next steps differ depending on your version of Excel:

    Excel 2013 extra step (first): In Excel 2013 you first need to open the workbook with saved Edit sessions, then click the MySQL for Excel icon on the Data menu (notice how in this version, every time you open or create a new file the MySQL for Excel panel is closed in the new window). Please note that if you work in Excel 2013 with several workbooks with open Edit sessions at the same time, you'll need to repeat this step each time you open one of them.

    Following steps: In Excel 2010 or earlier, you just need to make sure the MySQL for Excel panel is already open at this point; if it's not, do the previous step (the Excel 2013 extra step). For Excel 2010 or older versions you will only need to do it once.

    When saved sessions are detected, you will be prompted what to do with them: click Restore to continue working where you left off; click Discard to delete the saved sessions (all Edit session information for this file will be deleted from your computer, so you will no longer be prompted the next time you open this same file); or click Nothing to continue without opening saved sessions (this keeps the saved Edit sessions intact, so you will be prompted about them again the next time you open this workbook).

    And there you have it: now you can save your Edit sessions, close your workbook or turn off your computer, and still reopen them in the future to continue working right where you were. Today we talked about how you can save your active Edit sessions and restore them later, another feature included in the latest MySQL for Excel release (1.2.0). Please remember you can try this product and many others for free by downloading the installer directly from our MySQL Installer downloads page. Happy editing!

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization
    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity
    The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics
    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks
    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Obfuscation is not a panacea

    - by simonc
    So, you want to obfuscate your .NET application. My question to you is: why? What are your aims when you obfuscate your application? To protect your IP and algorithms? To prevent crackers from breaking your licensing? Your boss says you need to? To give you a warm fuzzy feeling inside? Obfuscating code correctly can be tricky, it can break your app if applied incorrectly, and it can cause problems down the line. Let me be clear: there are some very good reasons why you would want to obfuscate your .NET application. However, you shouldn't be obfuscating for the sake of obfuscating.

    Security through obfuscation?
    Once your application has been installed on a user's computer, you no longer control it. If they do not want to pay for your application, then nothing can stop them from cracking it, even if the time cost to them is much greater than the cost of actually paying for it. Some people will not pay for software, even if it takes them a month to crack a $30 app. And once it is cracked, there is nothing stopping them from putting the result up on the internet. There should be nothing surprising about this; there is no software protection available for general-purpose computers that cannot be cracked by a sufficiently determined attacker. Only by completely controlling the entire stack (software, hardware, and the internet connection) can you have even a chance of being uncrackable. And even then, someone somewhere will still have a go, and probably succeed. Even high-end cryptoprocessors have known vulnerabilities that can be exploited by someone with a scanning electron microscope and lots of free time.

    So, then, why use obfuscation? Well, the primary reason is to protect your IP. What obfuscation is very good at is hiding the overall structure of your program, so that it's very hard to figure out what exactly the code is doing at any one time, what context it is running in, and how it fits in with the rest of the application; all of which you need to do to understand how the application operates. This is completely different to cracking an application, where you simply have to find a single toggle that determines whether the application is licensed or not, and flip it without the rest of the application noticing.

    However, again, there are limitations. An obfuscated application still has to run in the same way, and do the same thing, as the original unobfuscated application. This means that some of the protections applied to the obfuscated assembly have to be undone at runtime, or else it would not run on the CLR and do the same thing. And, again, since we don't control the environment the application is run on, there is nothing stopping a user from undoing those protections manually and reversing some of the obfuscation. It's a perpetual arms race, and it always will be. We have plenty of ideas lined up for new protections, and the new protections added in SA 6.6 (method parent obfuscation and a new control flow obfuscation level) are specifically designed to be harder to reverse and reconstruct the original structure from.

    So then, by all means, obfuscate your application if you want to protect its algorithms and what the application does. That's what SmartAssembly is designed to do. But make sure you are clear about what a .NET obfuscator can and cannot protect you against, and don't expect your obfuscated application to be uncrackable. Someone, somewhere, will crack your application if they want to and they don't have anything better to do with their time. The best we can do is dissuade the casual crackers and make it much more difficult for the serious ones.

    Cross posted from Simple Talk.

    Read the article

  • Upgrade to Xubuntu 13.10 - Saucy Salamander

    As is common practice, it is possible to upgrade an existing installation of Ubuntu, or one of its derivatives, every six months. Of course, you might opt in for the adventure and keep your system always on the latest version (including alphas and betas), or you might like to play it safe and stay on the long-term support (LTS) versions, which are updated every two years only. As for me, I like to jump from release to release on my main desktop machine. And since 17th October, Saucy Salamander, also known as Ubuntu 13.10, has been released for general use. The following paragraphs document the steps I took in order to upgrade my system to the recent version. Don't worry about the fact that I'm actually using Xubuntu. It's mainly a flavoured version of Ubuntu running Xfce 4.10 as the default X window manager. Well, I have Gnome and LXDE on the same system... just out of curiosity.

    Preparing the system
    Before you think about upgrading you have to ensure that your current system is running on the latest packages. This can be done easily via a terminal like so:

      $ sudo apt-get update && sudo apt-get -y dist-upgrade --fix-missing

    Next, we are going to initiate the upgrade itself:

      $ sudo update-manager

    As a result, the graphical Software Updater should inform you that a newer version of Ubuntu is available for installation.

    [Image: Ubuntu's Software Updater informs you whether an upgrade is available]

    Running the upgrade
    After clicking 'Upgrade...' you will be presented with information about the new version.

    [Image: Details about Ubuntu 13.10 (Saucy Salamander)]

    Simply continue with the procedure and your system will be analysed for the next steps.

    [Image: Analysing the existing system and preparing the actual upgrade to 13.10]

    Next, we are at the point of no return: the last confirmation dialog before having a coffee break while your machine is occupied downloading the necessary packages. Not the best bandwidth at hand after all... yours might be faster.

    [Image: Are you really sure that you want to start the upgrade?]

    Let's go and have fun! Anyway, bye bye Raring Ringtail and welcome Saucy Salamander! In case you added any additional repositories like Medibuntu or PPAs, you will be informed that they are going to be disabled during the upgrade and that they might require some manual intervention after completion.

    [Image: Ubuntu is playing safe and third party repositories are disabled during the upgrade]

    Well, depending on your internet bandwidth, this might take anything between a couple of minutes and some hours to download all the packages and then trigger the actual installation process. In my case, I left my PC unattended during the night.

    Time to reboot
    Finally, it's time to restart your system and see what's going to happen... In my case, absolutely nothing unexpected. The system booted the new kernel 3.11.0 as usual and I was greeted by a new login screen. Honestly, the 'same' system as before, which is good (I love that kind of consistency), and I can continue to work productively. And Software Updater also confirms that we just had a painless upgrade:

    [Image: System is running Ubuntu 13.10 - Saucy Salamander - and up to date]

    See you in six months again... ;-)

    Post-scriptum
    In case you would like to upgrade to the latest development version of Ubuntu, run the following command in a console and repeat all the steps described above:

      $ sudo update-manager -d

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked: "When I am gathering requirements on a project, should I use a Process Modeling approach, or should I use a Use Case approach?" Not surprisingly, the short answer is "it depends"!

    Let's take a scenario where you are working on a Sales Force Automation project. We'll call the process that is being implemented "Lead-to-Order". I would typically think of this type of project as being "Process Centric". In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined "best-practice" business process models exist, then of course they could and should be used during the workshops, but even in their absence, the focus of the workshops will be to define the optimum series of tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business.

    Now let's take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for user manuals, web sites, social media publications, etc. Would you call this type of project "Process Centric"? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions.

    At this point you might be asking "why couldn't I use a Process Modeling approach for my Content Management project?" Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of tasks that will eventually be automated by the system, for example:

      Task Name                 Best Suited To   Notes
      Manage outbound calls     Process Model    A series of linked human and system tasks for calling and following up with prospects
      Manage content revision   Use Case         Updating the content on a website
      Update User Preferences   Use Case         Updating a user's display preferences
      Assign Lead               Process Model    Reviewing a lead, then assigning it to a sales person
      Convert Lead to Quote     Process Model    Updating the status of a lead, and then converting it to a sales order

    As you can see, it's not an exact science, and either approach is viable for the tasks listed above. However, where you have a series of interconnected tasks or activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the tasks or activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering.

    Now let's take one final scenario: as you captured the To-Be process flows for the Sales Force Automation project, you discover a "Gap" in terms of what the client requires and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • iPhone - direct link to the App Store review form from inside an app

    - by Mike
    I am trying to link directly to the review form of one of my apps. I know that it is possible because Appirater did it in the past, but some change in iTunes turned the API down. Appirater uses this URL:

      NSString *templateReviewURL = @"itms-apps://itunes.apple.com/WebObjects/MZStore.woa/wa/viewContentsUserReviews?id=APP_ID&onlyLatestVersion=true&pageNumber=0&sortOrdering=1&type=Purple+Software";

    where APP_ID is the ID of an application. Running this from inside the app gives me the message "Cannot Connect to iTunes Store". This page talks about another kind of link:

      https://userpub.itunes.apple.com/WebObjects/MZUserPublishing.woa/wa/addUserReview?id=APP_ID&type=Purple+Software

    and also:

      itms-apps://ax.itunes.apple.com/WebObjects/MZStore.woa/wa/viewContentsUserReviews?type=Purple+Software&id=APP_ID

    The first one works, but only from the desktop Mac. The second gives me the same error as the first: "Cannot Connect to iTunes Store". iTunes Link Maker is not helping either, because it has no tools for iPad links... Do you guys know how to link to an app's review form from inside an app? In case you don't know, what kind of tool should I use to dig into this? A packet sniffer?
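
    For context, a minimal sketch of how an Appirater-style review link is typically opened from inside an app; the URL template is the one quoted above, the ID used (281941097) is just an example value, and whether this template still resolves is exactly the open question here:

      // Substitute a real numeric iTunes ID for APP_ID before opening.
      NSString *templateReviewURL = @"itms-apps://itunes.apple.com/WebObjects/MZStore.woa/wa/viewContentsUserReviews?id=APP_ID&onlyLatestVersion=true&pageNumber=0&sortOrdering=1&type=Purple+Software";
      NSString *reviewURL = [templateReviewURL stringByReplacingOccurrencesOfString:@"APP_ID"
                                                                         withString:@"281941097"];
      // Hand the itms-apps:// URL to the system so the App Store app handles it.
      [[UIApplication sharedApplication] openURL:[NSURL URLWithString:reviewURL]];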

    Read the article

  • Problem loading Oracle client libraries when running in a NAnt build

    - by Chris Farmer
    I am trying to use dbdeploy to manage Oracle schema changes. I can run it successfully from the command line to get it to generate my change scripts, but when I try to execute it via the dbdeploy NAnt task running through TeamCity, I get an error:

      System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.

    I do have the Oracle 10.2.0.2 client software installed. It's the first entry in the system path, and the dbdeploy.exe app is able to successfully negotiate an Oracle connection. The dbdeploy code dynamically loads the System.Data.OracleClient assembly, which in turn tries to use the Oracle client bits to talk to the database. This is what is failing in my NAnt environment. I have verified the following points:

    - The same user identity is running the process in both cases
    - The same working directory is used in both cases
    - The same dbdeploy code is running in both cases, and with the same supplied parameters
    - The same database connection string is being used in both cases
    - The same ADO.NET assembly is being dynamically loaded in both cases (System.Data.OracleClient, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089)

    Here's the top of the stack trace during the error:

      at System.Data.OracleClient.OCI.DetermineClientVersion()
      at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
      at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
      at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
      at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
      at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
      at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
      at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
      at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
      at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
      at System.Data.OracleClient.OracleConnection.Open()
      at Net.Sf.Dbdeploy.Database.DatabaseSchemaVersionManager.GetCurrentVersionFromDb()

    My main question is this: how can I discover what's different about these running environments to see why my Oracle client software can't be loaded?
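
    To answer the literal question (how to see what differs between the two environments), one approach is a tiny probe run from both contexts and diffed; this is a hypothetical standalone utility, not part of dbdeploy or NAnt:

      // Hypothetical diagnostic: compile this, invoke it from both the
      // command line and the TeamCity/NAnt run, then diff the output.
      using System;

      class EnvProbe
      {
          static void Main()
          {
              // PATH is what System.Data.OracleClient uses to locate the OCI DLLs
              Console.WriteLine("PATH = " + Environment.GetEnvironmentVariable("PATH"));
              Console.WriteLine("ORACLE_HOME = " + Environment.GetEnvironmentVariable("ORACLE_HOME"));
              // 4 = 32-bit process, 8 = 64-bit; a bitness mismatch with the
              // installed Oracle client is one plausible cause of this error
              Console.WriteLine("IntPtr.Size = " + IntPtr.Size);
              Console.WriteLine("CLR = " + Environment.Version);
              Console.WriteLine("User = " + Environment.UserName);
              Console.WriteLine("CWD = " + Environment.CurrentDirectory);
          }
      }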

    Read the article

  • How can I install the BlackBerry v5.0.0 component pack into Eclipse?

    - by Dan J
    I'm trying to install the latest v5.0.0 "beta 2" BlackBerry OS Component Pack into Eclipse 3.4.2 with BlackBerry Eclipse plugin v1.0.0.67, but have hit a few problems. Has anybody found an easy way to do this? I had no trouble installing the v4.5.0 and v4.7.0 Component Packs. It's rather strange that BlackBerry are shipping new phones with the v5.0.0 OS installed (e.g. the Storm 2 9550 and Bold 9700 that I just bought), and pushing that update to phones, while the BlackBerry website still considers the v5.0.0 SDK / Component Packs to be "beta 2"! If anybody knows when an official non-beta Component Pack is going to be released, that might solve my problem...

    In case it helps, the problems I've hit so far are:

    - Contrary to the implication on the BlackBerry website, the Eclipse "Software Update..." option for the v5.0.0 Component Pack claims it only works with the v1.0.0 Eclipse BlackBerry plugin, not the new v1.1 one.
    - I then tried to install the v5.0.0 Component Pack through the "Software Updates..." menu in Eclipse using the v1.0.0 Eclipse BlackBerry plugin. Once I'd done the 200MB download, the install failed with an "Invalid zip file format" error.
    - I might just have been unlucky with a corrupted download, but I did try it twice: once through "Software Updates..." and once by selecting "Archive" to install the downloaded Component Pack (which, unlike v4.5.0 and v4.7.0, was a JAR, not a ZIP).

    Read the article

  • Eclipse: Cannot install CDT because of a conflicting dependency

    - by MarkZar
    My Eclipse has been configured for Java and PyDev; now I want to configure the C/C++ development tools. I don't want to download the whole Eclipse IDE for C/C++ Developers, as that is not convenient, so I decided to install CDT into my existing Eclipse. I went to Help > Install New Software, entered http://download.eclipse.org/tools/cdt/releases/indigo, waited for a while, and chose the CDT Main Features and CDT Optional Features. On clicking Next, an error occurred:

      Cannot complete the install because of a conflicting dependency.
      Software being installed: C/C++ DSF GDB Debugger Integration 4.0.0.201106081058 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.0.201106081058)
      Software being installed: C/C++ Development Tools SDK 8.0.2.201202111925 (org.eclipse.cdt.sdk.feature.group 8.0.2.201202111925)
      Only one of the following can be installed at once:
        GDB DSF Debugger Integration Core 4.0.0.201106081058 (org.eclipse.cdt.dsf.gdb 4.0.0.201106081058)
        GDB DSF Debugger Integration Core 4.0.2.201202111925 (org.eclipse.cdt.dsf.gdb 4.0.2.201202111925)
        GDB DSF Debugger Integration Core 4.0.1.201109151620 (org.eclipse.cdt.dsf.gdb 4.0.1.201109151620)
      Cannot satisfy dependency:
        From: C/C++ Development Tools 8.0.2.201202111925 (org.eclipse.cdt.feature.group 8.0.2.201202111925)
        To: org.eclipse.cdt.gnu.dsf.feature.group [4.0.1.201202111925]
      Cannot satisfy dependency:
        From: C/C++ DSF GDB Debugger Integration 4.0.0.201106081058 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.0.201106081058)
        To: org.eclipse.cdt.dsf.gdb [4.0.0.201106081058]
      Cannot satisfy dependency:
        From: C/C++ DSF GDB Debugger Integration 4.0.1.201202111925 (org.eclipse.cdt.gnu.dsf.feature.group 4.0.1.201202111925)
        To: org.eclipse.cdt.dsf.gdb [4.0.2.201202111925]
      Cannot satisfy dependency:
        From: C/C++ Development Tools SDK 8.0.2.201202111925 (org.eclipse.cdt.sdk.feature.group 8.0.2.201202111925)
        To: org.eclipse.cdt.feature.group [8.0.2.201202111925]

    I have googled for a long time but still cannot find a valid solution. Can anyone give me a hand? Thanks a lot!

    Read the article

  • How do I break down an NSTimeInterval into year, months, days, hours, minutes and seconds on iPhone?

    - by willc2
    I have a time interval that spans years and I want all the time components from years down to seconds. My first thought is to integer-divide the time interval by the seconds in a year, subtract that from a running total of seconds, divide that by the seconds in a month, subtract that from the running total, and so on. That just seems convoluted, and I've read that whenever you are doing something that looks convoluted, there is probably a built-in method. Is there?

    I integrated Alex's second method into my code. It's in a method called by a UIDatePicker in my interface:

      NSDate *now = [NSDate date];
      NSDate *then = self.datePicker.date;
      NSTimeInterval howLong = [now timeIntervalSinceDate:then];
      NSDate *date = [NSDate dateWithTimeIntervalSince1970:howLong];
      NSString *dateStr = [date description];
      const char *dateStrPtr = [dateStr UTF8String];
      int year, month, day, hour, minute, sec;
      sscanf(dateStrPtr, "%d-%d-%d %d:%d:%d", &year, &month, &day, &hour, &minute, &sec);
      year -= 1970;
      NSLog(@"%d years\n%d months\n%d days\n%d hours\n%d minutes\n%d seconds", year, month, day, hour, minute, sec);

    When I set the date picker to a date 1 year and 1 day in the past, I get:

      1 years 1 months 1 days 16 hours 0 minutes 20 seconds

    which is 1 month and 16 hours off. If I set the date picker to 1 day in the past, I am off by the same amount.

    Update: I have an app that calculates your age in years, given your birthday (set from a UIDatePicker), yet it was often off. This proves there was an inaccuracy, but I can't figure out where it comes from. Can you?
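
    There is in fact a built-in route for this kind of calendar arithmetic: NSCalendar can split the span between two dates into components directly, avoiding both the 1970 epoch offset and the fixed month lengths that skew the sscanf approach. A minimal sketch, reusing the now and then variables from the code above:

      // Let the calendar compute calendar-aware differences instead of
      // parsing the -description string of a synthetic date.
      NSCalendar *cal = [NSCalendar currentCalendar];
      NSUInteger units = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit |
                         NSHourCalendarUnit | NSMinuteCalendarUnit | NSSecondCalendarUnit;
      NSDateComponents *comps = [cal components:units fromDate:then toDate:now options:0];
      NSLog(@"%d years %d months %d days %d hours %d minutes %d seconds",
            (int)[comps year], (int)[comps month], (int)[comps day],
            (int)[comps hour], (int)[comps minute], (int)[comps second]);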

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete, it is built and only the code is committed to SVN. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT is undertaken. If rejected, the developer makes the relevant changes and starts the release process again.

    This is all done manually, without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error because they are mixed up in files associated with a signed-off release.

    I would like to move this to a more controlled and automated process where all source code and output files are held in SVN and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in house for our .NET applications). My question is how to manage the releases of multiple changes where some will be signed off and moved on, and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a Release Branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?

    Read the article

  • Pointing non-www to a specific sub-directory

    - by Ben Sinclair
    I might be going about this all wrong, so let me know if I am. I am creating software that allows people to sign up and have their own sub-domain on my website. So say my website is ben.com; they could have their own sub-domain called juice.ben.com. When they type their sub-domain juice.ben.com into their address bar, it will load the contents of a root directory. I have also set up a .htaccess redirect to redirect www.ben.com to ben.com. Not sure if this matters for my question, but I thought I'd mention it.

    OK, so basically what I think I need to do is put the software they've signed up to in the root directory. So when someone goes to juice.ben.com, they will be pointed to the root directory (I believe I can set up wildcard sub-domains with my host) and the software will then analyse their sub-domain and display their account. Now, if someone just types ben.com into their browser, I want it to show the contents of the ben.com/_website/ folder but still show in the address bar that they are in the root directory. Hopefully I am making sense :) Is this possible with .htaccess? If so, what do I need to do?
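
    A minimal mod_rewrite sketch of the bare-domain part, assuming ben.com and the _website/ folder from the question (untested; the wildcard sub-domain part is handled by DNS and the software's own sub-domain parsing):

      # In the root .htaccess: serve /_website/ content for the bare domain
      # without changing what shows in the address bar (internal rewrite).
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^ben\.com$ [NC]
      RewriteCond %{REQUEST_URI} !^/_website/
      RewriteRule ^(.*)$ /_website/$1 [L]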

    Read the article

  • IE innerHTML chops sentence if the last word contains '&' (ampersand)

    - by Mandai
    I am trying to populate a DOM element with ID 'myElement'. The content which I'm populating is a mix of text and HTML elements. Assume the following is the content I wish to populate in my DOM element:

      var x = "<b>Success</b> is a matter of hard work &luck";

    I tried using innerHTML as follows:

      document.getElementById("myElement").innerHTML = x;

    This resulted in chopping off of the last word in my sentence. Apparently, the problem is due to the '&' character present in the last word. I played around with '&' and innerHTML, and the following are my observations:

    - If the last word of the content is less than 10 characters long and has a '&' character present in it, innerHTML chops off the sentence at '&'. This problem does not happen in Firefox.
    - If I use innerText, the last word is intact, but then all the HTML tags which are part of the content become plain text.

    I also tried populating through jQuery's #html method:

      $("#myElement").html(x);

    This approach solves the problem in IE but not in Chrome. How can I insert HTML content whose last word contains '&' without it being chopped off in all browsers?
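
    One common workaround is to escape any bare '&' (one that does not already start an entity) before assigning to innerHTML; a sketch, assuming the input may mix real entities with stray ampersands:

      // Escape bare '&' characters (those not already part of an entity
      // like &amp; or &#38;) so the HTML parser cannot eat the tail.
      function escapeBareAmpersands(s) {
          return s.replace(/&(?!(?:[a-zA-Z]+|#\d+);)/g, "&amp;");
      }

      var x = "<b>Success</b> is a matter of hard work &luck";
      document.getElementById("myElement").innerHTML = escapeBareAmpersands(x);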

    Read the article

  • SubSonic 3 ignoring columns in Select()

    - by jessegavin
    I have a table like so:

      CREATE TABLE [dbo].[Locations_Hours](
          [LocationID] [int] NOT NULL,
          [sun_open] [nvarchar](10) NULL,
          [sun_close] [nvarchar](10) NULL,
          [mon_open] [nvarchar](10) NULL,
          [mon_close] [nvarchar](10) NULL,
          [tue_open] [nvarchar](10) NULL,
          [tue_close] [nvarchar](10) NULL,
          [wed_open] [nvarchar](10) NULL,
          [wed_close] [nvarchar](10) NULL,
          [thu_open] [nvarchar](10) NULL,
          [thu_close] [nvarchar](10) NULL,
          [fri_open] [nvarchar](10) NULL,
          [fri_close] [nvarchar](10) NULL,
          [sat_open] [nvarchar](10) NULL,
          [sat_close] [nvarchar](10) NULL,
          [StoreNumber] [int] NULL,
          [LocationHourID] [int] IDENTITY(1,1) NOT NULL,
          CONSTRAINT [PK_Locations_Hours] PRIMARY KEY CLUSTERED
          (
              [LocationHourID] ASC
          ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]

    And SubSonic 3 is generating a class with the following properties:

      int LocationID
      string monopen
      string monclose
      string tueopen
      string tueclose
      string wedopen
      string wedclose
      string thuopen
      string thuclose
      string friopen
      string friclose
      string satopen
      string satclose
      string sunopen
      string sunclose
      int? StoreNumber
      int LocationHourID

    When I try to perform a query against this class like so:

      var result = DB.LocationHours.Where(o => o.LocationID == _locationId);

    this is the resulting SQL query that SubSonic generates:

      SELECT [t0].[LocationHourID], [t0].[LocationID], [t0].[StoreNumber]
      FROM [dbo].[Locations_Hours] AS t0
      WHERE ([t0].[LocationID] = 4019)

    I cannot figure out why SubSonic is omitting the nvarchar fields when it generates the SELECT statement. Anyone got any ideas?

    Read the article

  • Scrapy issue with iTunes' AppStore

    - by Eric
    I am using Scrapy to fetch some data from iTunes' AppStore database. I start with this list of apps: http://itunes.apple.com/us/genre/mobile-software-applications/id36?mt=8 In the following code I have used the simplest regex, which targets all apps in the US store.

        from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
        from scrapy.contrib.spiders import CrawlSpider, Rule

        class AppStoreSpider(CrawlSpider):
            domain_name = 'itunes.apple.com'
            start_urls = ['http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8']
            rules = (
                Rule(SgmlLinkExtractor(allow='itunes\.apple\.com/us/app'),
                     'parse_app', follow=True, ),
            )

            def parse_app(self, response):
                ....

        SPIDER = AppStoreSpider()

    When I run it I receive the following:

        [itunes.apple.com] DEBUG: Crawled (200) <GET http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8> (referer: None)
        [itunes.apple.com] DEBUG: Filtered offsite request to 'itunes.apple.com': <GET http://itunes.apple.com/us/app/bloomberg/id281941097?mt=8>

    As you can see, when it starts crawling the first page it says "Filtered offsite request to 'itunes.apple.com'" and then the spider stops. It also returns this message:

        [ScrapyHTTPPageGetter,client] /usr/lib/python2.5/cookielib.py:1577: exceptions.UserWarning: cookielib bug!
        Traceback (most recent call last):
          File "/usr/lib/python2.5/cookielib.py", line 1575, in make_cookies
            parse_ns_headers(ns_hdrs), request)
          File "/usr/lib/python2.5/cookielib.py", line 1532, in _cookies_from_attrs_set
            cookie = self._cookie_from_cookie_tuple(tup, request)
          File "/usr/lib/python2.5/cookielib.py", line 1451, in _cookie_from_cookie_tuple
            if version is not None: version = int(version)
        ValueError: invalid literal for int() with base 10: '"1"'

    I have used the same script for other websites and I didn't have this problem. Any suggestions?
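    For comparison, a sketch of the same spider against the modern Scrapy API, where allowed_domains (rather than the old domain_name attribute) is what the offsite middleware checks; this is my assumption about the fix, not something the post confirms:

        from scrapy.spiders import CrawlSpider, Rule
        from scrapy.linkextractors import LinkExtractor

        class AppStoreSpider(CrawlSpider):
            name = 'appstore'
            allowed_domains = ['itunes.apple.com']  # requests to other hosts get filtered
            start_urls = ['http://itunes.apple.com/us/genre/mobile-software-applications/id6015?mt=8']
            rules = (
                Rule(LinkExtractor(allow=r'itunes\.apple\.com/us/app'),
                     callback='parse_app', follow=True),
            )

            def parse_app(self, response):
                # Extraction logic goes here.
                pass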

    Read the article

  • Error while dynamically loading mapi32.dll

    - by The_Fox
    Our application uses Simple MAPI to send e-mails. One of our clients has problems sending e-mail from a session on his terminal server. The mapi32.dll is loaded with a call to LoadLibrary, which succeeds, but the application then tries to get the addresses of the functions MAPILogon, MAPILogOff, MAPISendMail, MAPIFreeBuffer and MAPIResolveName. The problem is that GetProcAddress fails for those functions with ERROR_ACCESS_DENIED (code: 5), except for MAPIFreeBuffer. It looks like some sort of security thing. How can I fix this, or should I use another method to send mail? FYI, here is some more information about the OS and the contents of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Messaging Subsystem:

        OS info: 5.2.3790 VER_PLATFORM_WIN32_NT Service Pack 2

        Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem
            InstallCmd: rundll32 setupapi,InstallHinfSection MSMAIL 132 msmail.inf
            MAPI: 1
            CMCDLLNAME: mapi.dll
            CMCDLLNAME32: mapi32.dll
            CMC: 1
            MAPIX: 1
            MAPIXVER: 1.0.0.1
            OLEMessaging: 1

        Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem\MSMapiApps
            inetsw95.exe:
            choosusr.dll:
            msab32.dll:
            nwab32.dll:
            outstore.dll: Microsoft Outlook
            CDOEXM.DLL:
            EMSMDB32.DLL:
            EMSABP32.DLL:
            newprof.exe: Microsoft Outlook
            outlook.exe:
            wfxmsrvr.exe: Microsoft Outlook
            msexcimc.exe:
            exchng32.exe:
            schdmapi.dll: Microsoft Outlook
            pilotcfg.exe: Microsoft Outlook
            mailmig.exe: Microsoft Outlook
            admin.exe:
            msspc32.dll: Microsoft Outlook
            cnfnot32.exe: Microsoft Outlook
            ilpilot.exe: Microsoft Outlook
            events.exe:

    I'm on Delphi 7.0, but that shouldn't matter.

    Edit, added version information:

        Fileversion info of C:\WINDOWS\system32\mapi32.dll
            Fileversion: 6.5.7226.0
            FileDescription=Extended MAPI 1.0 for Windows NT
            CompanyName=Microsoft Corporation
            InternalName=MAPI32
            Comments=Service Pack 1
            LegalCopyRight=Copyright (C) 1986-2003 Microsoft Corp. All rights reserved.
            LegalTradeMarks=Microsoft(R) and Windows(R) are registered trademarks of Microsoft Corporation.
            OriginalFileName=MAPI32.DLL
            ProductName=Microsoft Exchange
            ProductVersion=6.5

        Fileversion info of C:\Program Files\Common Files\SYSTEM\MSMAPI\1043\msmapi32.dll
            Fileversion: 11.0.5601.0
            FileDescription=Extended MAPI 1.0 for Windows NT
            CompanyName=Microsoft Corporation
            InternalName=MAPI32.DLL
            LegalCopyRight=Copyright © 1995-2003 Microsoft Corporation. All rights reserved.
            OriginalFileName=MAPI32.DLL
            ProductName=MAPI32
            ProductVersion=11.0.5601
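    For context, the loading pattern the question describes looks roughly like this in Win32 C; this is a sketch of the mechanism, not the questioner's Delphi code:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Step 1: load the Simple MAPI library; this succeeds for the client. */
            HMODULE mapi = LoadLibraryA("mapi32.dll");
            if (!mapi) {
                printf("LoadLibrary failed: %lu\n", GetLastError());
                return 1;
            }

            /* Step 2: resolve the entry points; this is where GetProcAddress
               reportedly fails with error 5 (ERROR_ACCESS_DENIED). */
            FARPROC sendMail = GetProcAddress(mapi, "MAPISendMail");
            if (!sendMail)
                printf("GetProcAddress(MAPISendMail) failed: %lu\n", GetLastError());

            FreeLibrary(mapi);
            return 0;
        }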

    Read the article

  • UiModeManager - NightMode (FroYo)

    - by Kaloer
    Hi there, I have been trying to turn off the buttons' backlight in my application using the UiModeManager's night-mode function. The default Desk Clock application (Nexus One) turns off the buttons' backlight when it is dimmed, and I want to do this as well. I've tried using the following code:

        UiModeManager mgr = (UiModeManager) getSystemService(UI_MODE_SERVICE);
        mgr.setNightMode(UiModeManager.MODE_NIGHT_YES);

    The UiModeManager.setNightMode(int mode) documentation says this: "Sets the night mode. Changes to the night mode are only effective when the car or desk mode is enabled on a device." Does that mean that the device has to be physically in a desk dock? I can set the device to car mode using the UiModeManager.enableCarMode(int flags) method. This works fine, but it doesn't turn off the lights; it only dims the screen's backlight. Is there a way to put the device into desk mode without using a physical desk dock? As the FroYo source code is not yet released, I cannot look at the built-in Desk Clock application. Thanks in advance.
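    For reference, a minimal sketch that combines the two calls mentioned in the question, forcing car mode programmatically before requesting night mode; whether this actually cuts the button backlight is exactly what is being asked, so treat it as a starting point rather than a confirmed fix:

        UiModeManager mgr = (UiModeManager) getSystemService(Context.UI_MODE_SERVICE);

        // Night mode only takes effect while car (or desk) mode is active,
        // so enable car mode first; no physical dock is required for this call.
        mgr.enableCarMode(0);
        mgr.setNightMode(UiModeManager.MODE_NIGHT_YES);

        // Later, to restore the normal UI mode:
        // mgr.disableCarMode(0);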

    Read the article

  • Text mining on large database (data mining)

    - by yox
    Hello, I have a large database of resumes (CVs) and a table skills grouping all users' skills. Inside that table there's a field skill_text that describes the skill in full text. I'm looking for an algorithm/software/method to extract significant terms/phrases from that table in order to build a new table with standardized skills. Here are some example skills extracted from the DB:

        Sectoral and competitive analysis
        Business Development (incl. in international settings)
        Specific structure and road design software - Microstation, Macao, AutoCAD (basic knowledge)
        Creative work (Photoshop, In-Design, Illustrator)
        checking and reporting back on campaign progress
        organising and attending events and exhibitions
        Development : Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX
        Discipline: One to one marketing, E-marketing (SEO & SEA, display, emailing, affiliate program)
        Mix marketing, Viral Marketing, Social network marketing.

    The output should be something like:

        Sectoral and competitive analysis
        Business Development
        Specific structure and road design software - Macao AutoCAD
        Photoshop
        In-Design
        Illustrator
        organising events
        Development
        Aptana Studio
        PHP
        HTML
        CSS
        JavaScript
        SQL
        AJAX
        Mix marketing
        Viral Marketing
        Social network marketing
        emailing
        SEO
        One to one marketing

    As you can see, only skills remain; no other surrounding text. I know this is possible using text mining techniques, but how do I do it? The database is really large, which is a good thing because we can calculate term frequency and decide whether something is a real skill or just meaningless text. The big problem is: how do I determine that "blablabla" is a skill? Thanks
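    One possible starting point, my suggestion rather than anything from the post: treat it as keyphrase extraction and rank candidate n-grams by how many skill_text rows they occur in, since the size of the database means genuinely shared skills will recur while free-text noise will not. A minimal sketch with scikit-learn (the sample rows and the frequency cut-off are assumptions):

        from sklearn.feature_extraction.text import CountVectorizer

        # skill_text rows pulled from the skills table (sample data here).
        rows = [
            "Creative work (Photoshop, In-Design, Illustrator)",
            "Development: Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX",
            "Creative work with Photoshop and Illustrator",
        ]

        # Candidate phrases: unigrams to trigrams, English stop words removed.
        vec = CountVectorizer(ngram_range=(1, 3), stop_words="english")
        X = vec.fit_transform(rows)

        # Document frequency: in how many rows does each phrase appear?
        # Phrases above a threshold are kept as candidate standardized skills.
        df = (X > 0).sum(axis=0).A1
        ranked = sorted(zip(vec.get_feature_names_out(), df), key=lambda t: -t[1])
        print([phrase for phrase, count in ranked if count >= 2])

    On the real table the threshold would be far higher, and a whitelist or manual review step would still be needed to answer the "is 'blablabla' a skill?" question.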

    Read the article

  • Debugging a working program on Mathematica 5 with Mathematica 7

    - by Neuschwanstein
    Hi everybody, I'm currently reading the Mathematica Guidebooks for Programming and I was trying to work out one of the very first programs in the book. Basically, when I run the following program:

        Plot3D[{Re[Exp[1/(x + I y)]]}, {x, -0.02, 0.022}, {y, -0.04, 0.042},
         PlotRange -> {-1, 8}, PlotPoints -> 120, Mesh -> False,
         ColorFunction -> Function[{x1, x2, x3}, Hue[Arg[Exp[1/(x1 + I x2)]]]]]

    I either get 1/0 and e^∞ errors or, if I lower the PlotPoints option to, say, 60, an overflow error. I do get a working output, but it's not what it's supposed to be: the hue seems to diffuse from the left corner, whereas it should diffuse from the origin (as can be seen in the original output). Here is the original program, which apparently runs on Mathematica 5 (Trott, Mathematica Guidebook for Programming):

        Off[Plot3D::gval];
        Plot3D[{Re[Exp[1/(x + I y)]], Hue[Arg[Exp[1/(x + I y)]]]},
         {x, -0.02, 0.022}, {y, -0.04, 0.042},
         PlotRange -> {-1, 8}, PlotPoints -> 120, Mesh -> False]
        Off[Plot3D::gval];

    However, ColorFunction used this way (inside the first Plot3D argument) doesn't work any more, so I tried to adapt the code to the new way of specifying it. Well, thanks I guess!
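    A likely explanation, offered as my guess rather than anything from the post: since version 6, Plot3D scales the arguments it passes to ColorFunction into the range [0, 1] by default, so the hue here is computed over the unit square instead of the actual coordinates, which would shift the pattern away from the origin. Adding ColorFunctionScaling -> False feeds the raw x and y values back in:

        Plot3D[Re[Exp[1/(x + I y)]], {x, -0.02, 0.022}, {y, -0.04, 0.042},
         PlotRange -> {-1, 8}, PlotPoints -> 120, Mesh -> False,
         ColorFunction -> Function[{x1, x2, x3}, Hue[Arg[Exp[1/(x1 + I x2)]]]],
         ColorFunctionScaling -> False]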

    Read the article
