Search Results

Search found 27870 results on 1115 pages for 'standard output'.


  • Update Since Microsoft/PSC Office Open XML Case Study

    - by Tim Murphy
    In 2009 Microsoft released a case study about a project that we had done using the OOXML SDK 1.0 for Research Directors Inc. Since that time Microsoft has released version 2.0 of the SDK and PSC has done significant development with it. Below are some of the milestones we have reached since the original case study.

    At the time of the original case study, two report types had been automated to output as PowerPoint presentations. Now that all the main products have been delivered, we have added three reports with Word document outputs and five more reports with PowerPoint outputs.

    One improvement we made over the original application was to create a PowerPoint Add-In which allows the users to tag a slide. These tags, along with the strongly typed SDK 2.0, allow the code to use LINQ to easily search for slides in the template files. This allows for a more flexible architecture based on assembling a presentation from copied slides extracted from the templates.

    The new library we created also enabled us to create two new Word-based reports in two weeks. The library abstracts the generation of the documents from the business logic and the data retrieval. The key to this is the markup: Content Controls are a good method for identifying sections of a template to be modified or replaced. Combine this with the concept of all data being generically either scalar or two-dimensional, and the code becomes more generic.

    In the end we found the OOXML SDK 2.0 to be a great tool for accelerating document generation development and creating happy clients.

    del.icio.us Tags: PSC Group, OOXML, Case Study, Office Open XML, Word, PowerPoint

    Read the article

  • Modern monitor technologies - need to find a new monitor

    - by Michal Minicki
    I'm preparing to replace my old LCD monitor with a new one. I have an old NEC 20WGX2 Pro based on an IPS panel. I'm looking for a screen that gives good color output but is also very good for gaming, since that is its primary use. I tend to switch monitors between my different computers at home, so it has to be multi-purpose; hence the IPS technology before. Where can I read up on the newest monitor technologies so I can make an informed decision? I need to find the best fit for myself, and my knowledge is very outdated at the moment. Any hints are greatly appreciated, be it info on technologies, web sources, links to other questions, etc.

    Read the article

  • I am trying to set up Ubuntu Server 12.04 on my machine

    - by Jseb
    I am trying to set up a server on my home network which will eventually host Rails. I am not great with Linux servers, so I tried to follow the prompts. I did successfully get to a black screen which then prompts me for a username and password before letting me do anything. I roughly followed this tutorial: http://www.ubuntugeek.com/step-by-step-ubuntu-11-04-natty-lamp-server-setup.html (my commands were not 100% the same, or in the same order, but the same idea). Then I wanted to install Ubuntu Server with a GUI, so I tried:

        sudo apt-get upgrade
        sudo apt-get install ubuntu-desktop

    The upgrade, however, gives me errors like the following, which are then ignored:

        Err http... InRelease
        W: Failed to fetch ht...

    And if I try the desktop one, I get:

        E: Unable to locate package ubuntu
        E: Unable to locate package desktop

    So I am assuming I am not connected to the internet, and I tried the following command:

        sudo vi /etc/network/interfaces

    Here is what it shows me (and I know the gateway on my laptop is 192.168.1.1):

        address 192.168.1.148
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        gateway 192.168.1.1

    By the way, I do not know the command to save and exit vi. The full apt output is:

        Err http://us.archive.ubuntu.com precise InRelease
        Err http://us.archive.ubuntu.com precise-updates InRelease
        Err http://us.archive.ubuntu.com precise-backports InRelease
        Reading package lists... Done
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise/InRelease
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-updates/InRelease
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/InRelease
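
    A note on the two sub-questions above. To save and exit vi: press Esc, then type :wq and press Enter (:q! quits without saving). And a minimal static-IP sketch for /etc/network/interfaces matching the addresses quoted; the interface name eth0 and the router doubling as DNS server are assumptions, so check with ifconfig -a first:

        # /etc/network/interfaces -- minimal static-IP sketch (eth0 is an assumption)
        auto eth0
        iface eth0 inet static
            address 192.168.1.148
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1

    After saving, apply it with sudo ifdown eth0 && sudo ifup eth0 (or reboot), then retry sudo apt-get update.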

    Read the article

  • Can Apache 2 be configured to start sending gzipped data early?

    - by rikh
    We have Apache set up to gzip-compress HTML pages before they are sent to the client browser. However, some of our pages are slow to generate, and it seems that Apache holds on until it has the complete page, compresses it, and then sends it to the browser. There are big chunks of the page (the main important bits) that are actually generated and output fairly quickly. Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this? If not, can you suggest any other way to get gzip compression working for the server? The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.
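
    A sketch of the usual approach, assuming mod_deflate is available in this Apache 2.0 build: mod_deflate compresses the response as a stream, so gzipped chunks can leave the server before the script finishes, provided the output actually reaches Apache early.

        # httpd.conf sketch -- compress HTML as it streams, not after it completes
        LoadModule deflate_module modules/mod_deflate.so
        AddOutputFilterByType DEFLATE text/html

    On the PHP side, disable output buffering (output_buffering = Off in php.ini) and call flush() after each generated chunk so the bytes reach Apache as soon as they exist. If the page still arrives in one piece, something else (another filter or a proxy) is buffering the whole response.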

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that were somehow throwing it off. I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! (End edit.)

    Something I have noticed in Ubuntu for a long time that has been frustrating to me: when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line, the cursor goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the command itself, but visually it overwrites the text that was displayed.)

    It's hard to explain without seeing it, but say my terminal was 20 characters wide (mine is more like 120 characters, but for the sake of an example), and I want to echo the English alphabet. What I type is this:

        echo abcdefghijklmnopqrstuvwxyz

    But what my terminal looks like before I hit the Enter key is:

        pqrstuvwxyzghijklmno

    When I hit Enter, it echoes abcdefghijklmnopqrstuvwxyz, so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect on a terminal that was only 20 characters wide is this:

        echo abcdefghijklmno
        pqrstuvwxyz

    Background: I am using bash as my shell, and I have this line in my ~/.bashrc to be able to navigate the command line with vi commands:

        set -o vi

    I am currently using Ubuntu 10.10 server and connecting to the server with PuTTY. In any other environment I have worked in, if I type a long command line, a new line is added underneath the line I am working on when my command gets longer than the terminal width, and as I keep typing I can see my command on two different lines. But for as long as I can remember using Ubuntu, my long commands only occupy one line.

    This also happens when I am going back through the command history (I hit Esc, then 'k' to go back to previous commands): when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am in the command. The only work-around I have found to see the entire long command is to hit "Esc-v", which opens the current command in a vi editor.

    I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line and I still had the problem. I downloaded a fresh copy of PuTTY and didn't make any changes to the configuration; I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with PuTTY (unless I need to make some config changes). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian
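
    For anyone landing here with the same symptom, the usual culprit is exactly what the edit above describes: non-printing sequences (colors, titlebar escapes) in $PS1 must be wrapped in \[ and \] so readline can compute the real prompt width; otherwise it wraps too late and redraws over the line. A minimal sketch:

        # broken: readline counts the color escapes as printable characters
        PS1='\e[1;32m\u@\h:\w\$ \e[0m'

        # fixed: \[ \] mark the escapes as zero-width, so wrapping works again
        PS1='\[\e[1;32m\]\u@\h:\w\$ \[\e[0m\]'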

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

    The code for serial and accelerated

    Consider a naïve (or brute force) serial implementation of matrix multiplication:

         0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
         1: {
         2:   for (int row = 0; row < M; row++)
         3:   {
         4:     for (int col = 0; col < N; col++)
         5:     {
         6:       float sum = 0.0f;
         7:       for(int i = 0; i < W; i++)
         8:         sum += vA[row * W + i] * vB[i * N + col];
         9:       vC[row * N + col] = sum;
        10:     }
        11:   }
        12: }

    We notice that each loop iteration is independent from the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included #include <amp.h> at the top of your file.

        13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
        14: {
        15:   concurrency::array_view<const float,2> a(M, W, vA);
        16:   concurrency::array_view<const float,2> b(W, N, vB);
        17:   concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC);
        18:   concurrency::parallel_for_each(c.grid,
        19:     [=](concurrency::index<2> idx) restrict(direct3d) {
        20:       int row = idx[0]; int col = idx[1];
        21:       float sum = 0.0f;
        22:       for(int i = 0; i < W; i++)
        23:         sum += a(row, i) * b(i, col);
        24:       c[idx] = sum;
        25:   });
        26: }

    First a visual comparison, just for fun: the beginning and end are the same, i.e. lines 0,1,12 are identical to lines 13,14,26. The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25). The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24). We have extra lines in the C++ AMP version (15,16,17). Now let's dig in deeper.

    Using array_view and extent

    When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18) and the data will get copied on demand for us to the accelerator.

    Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one:

        extent<2> e(M, W);
        array_view<const float, 2> a(e, vA);

    In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two-dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional unrelated variables (i.e. with the integers M and W) – aren't you loving C++ AMP already?

    Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

    The kernel dispatch

    On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c, which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a two-dimensional manner, we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda.

    The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional, and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

    The kernel itself

    The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a, b) and write results into the output array_view (c). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a,b,c versus how we index into vA,vB,vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first-class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner).

    I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k or x,y,z, or M,N or whatever. Also note that we could have written line 24 as c(idx[0], idx[1]) = sum or c(row, col) = sum instead of the simpler c[idx] = sum.

    Targeting a specific accelerator

    Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. There would then be some code like this anywhere before line 18:

        vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators();
        accelerator acc = accs[0];

    …and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become:

        concurrency::parallel_for_each(acc.default_view, c.grid,

    …and the rest of your code remains the same… how simple is that?

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Reinstall ruby (or just yaml lib)

    - by Christian Sciberras
    I've installed Ruby 1.9 from source, and tried installing the gem 'bundler':

        $ gem install bundler
        /usr/local/lib/ruby/1.9.1/yaml.rb:56:in `<top (required)>':
        It seems your ruby installation is missing psych (for YAML output).
        To eliminate this warning, please install libyaml and reinstall your ruby.
        ....

    I've not been able to cleanly uninstall Ruby (wtf?!), and installing libyaml at this point didn't help either. So it seems I've ended up with a messed-up server, since I can't roll back nor fix the issue. Of course, I do have backups, but this situation is ridiculous nonetheless. Surely there must be a real fix?
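
    For the record, a clean uninstall usually isn't needed: a source build installed with the same --prefix simply overwrites itself. A sketch of the usual fix, assuming a Debian/Ubuntu box (package names differ elsewhere); the point is that libyaml's headers must be present before Ruby is built, so that its psych extension gets compiled:

        # install the libyaml headers (or build libyaml from source on other systems)
        sudo apt-get install libyaml-dev

        # then rebuild ruby in the original source tree; the same prefix
        # overwrites the existing install, so no uninstall step is needed
        cd ruby-1.9.3-*/          # path to your ruby source tree (an assumption)
        ./configure --prefix=/usr/local && make && sudo make install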

    Read the article

  • wget not converting links

    - by acrosman
    I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site is run on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to be actually converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. My most recent failed attempt was:

        nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org &

    I've also included --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l (again, not that they should matter). The results are files that still have links like:

        http://www.example.org/ht/d/sp/i/17770
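
    Two things worth checking in the command above. First, --convert-links only runs after the entire retrieval has finished, so a mirror that is interrupted (or killed while running under nohup) never reaches the conversion pass. Second, the unquoted -R *calendar* can be expanded by the shell before wget ever sees it, so quoting it is safer. A sketch:

        nohup wget --mirror --convert-links --backup-converted --html-extension \
            -l 10 -P afscSnapshot -R "*calendar*" \
            -o wget.log http://www.example.org &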

    Read the article

  • WiFi not working on HP dv4

    - by Blaze
    I get this output for sudo lshw -C network:

        *-network
           description: Ethernet interface
           product: RTL8111/8168B PCI Express Gigabit Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:08:00.0
           logical name: eth0
           version: 06
           serial: 3c:4a:92:cd:63:98
           size: 10Mbit/s
           capacity: 1Gbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s
           resources: irq:42 ioport:3000(size=256) memory:c0404000-c0404fff memory:c0400000-c0403fff
        *-network DISABLED
           description: Wireless interface
           product: RT5390 [802.11 b/g/n 1T1R G-band PCI Express Single Chip]
           vendor: Ralink corp.
           physical id: 0
           bus info: pci@0000:0a:00.0
           logical name: wlan0
           version: 00
           serial: 90:00:4e:82:3c:5b
           width: 32 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=rt2800pci driverversion=3.0.0-17-generic-pae firmware=0.34 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
           resources: irq:17 memory:c2500000-c250ffff

    I can't enable the WiFi using the Fn+F12 key; it's just stuck. In Windows 7 it gets enabled when I press the same keys, though only after Windows has booted up. I just want to use the WiFi in any way possible. Can anybody help?
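
    Since lshw reports the RT5390 as DISABLED, a first check from the terminal is whether the radio is soft-blocked (a software switch) or hard-blocked (the Fn+F12 firmware switch), and then to unblock it and bring the interface up; a sketch:

        rfkill list all               # look for "Soft blocked" / "Hard blocked"
        sudo rfkill unblock all       # clears a soft block
        sudo ip link set wlan0 up     # bring the interface up

    A hard block cannot be cleared from software; it would point at the BIOS or the hotkey driver.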

    Read the article

  • Unable to record sound from web browser (firefox / chromium ) using recordmydesktop

    - by thamurath
    I have to do some screencast tutorials and I am using recordMyDesktop with the GTK frontend. I need to record the sound as well, and that is where I have found a problem. It took me some time, but now I can record the sound from almost every application on my desktop... almost. I need to capture sound from a web application using Java, but when I load the page, nothing appears in the Playback tab of pavucontrol. I think this is the problem: if there is no sound stream, recordMyDesktop presumably thinks there is no sound to record. The funny thing is that I can hear the sound in my speakers! I have tried with Firefox and Chromium with no success, although I have been able to record YouTube videos without problem, so it seems that Java is the key here. Any suggestion or idea?

    P.S.: I am using Ubuntu 11.10 with this configuration (if more information is needed, please let me know). Sigh, I cannot post images... so: I have an Audigy 2 sound card using the Analog Stereo Output profile. I also have an "Internal Audio" device, but it has the "Off" profile. In recordMyDesktop's Advanced > Sound settings: Device = default.
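
    A guess at the cause, for what it's worth: if the Java plugin writes straight to ALSA and bypasses PulseAudio, you hear it (ALSA drives the card directly) but no stream ever appears in pavucontrol, and nothing monitor-based can record it. One possible workaround is routing ALSA's default device through Pulse (this assumes the ALSA pulse plugin is installed):

        # ~/.asoundrc sketch: send direct-ALSA apps through PulseAudio so
        # their sound shows up as a recordable stream
        pcm.!default pulse
        ctl.!default pulse

    After that, the sound should appear in pavucontrol's Playback tab; then, with recordMyDesktop's device set to pulse, you can pick the "Monitor of ..." source in the Recording tab to capture it.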

    Read the article

  • Scanned JPEGs are large and slow to load - can they be optimized losslessly?

    - by Alistair Knock
    I have hundreds of JPEG photographs which were scanned about 5 years ago from negatives using a Konica Minolta DiMAGE Scan Dual IV. The dimensions are ~4500x3000 and the file size is around 12MB, compared to shots from a DSLR with dimensions of 3000x2300 and a file size of 2-4MB (actually, these are the output from a RAW converter). The file size is obviously quite a big difference, but the issue that's bothering me is that the (perceived) loading time is at least 10 times slower. Is this size/speed discrepancy likely to be because the scanner software saved the JPEGs inefficiently or with an old compression format, or is it simply that the scanned negatives contain much more "detail" (in the form of grain/noise) than the digital images? If the former, is there a way to losslessly optimize them? I've tried re-exporting the scanned files to full-size JPEG from my RAW software, but the file size is pretty much the same. Both files will have been saved at 100 quality.
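
    On the lossless question: jpegtran (shipped with libjpeg/libjpeg-turbo) can rewrite a JPEG's entropy coding without touching the pixel data, which is the only truly lossless optimization available. A sketch:

        # losslessly repack the Huffman tables; -progressive often helps large
        # scans and also lets viewers render them progressively while loading
        jpegtran -optimize -progressive -copy all scan0001.jpg > scan0001-opt.jpg

    Expect modest savings, though: film grain is real high-frequency detail as far as the encoder is concerned, so most of the 12MB is genuine entropy and the scans will stay much larger than the DSLR files.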

    Read the article

  • Logout trouble with alternative shells (Win7 Home Premium x64)

    - by M. Cain
    I've tried bb4win and LiteStep, and every time I need to log out for some reason (shutdown and the rest included), the system hangs and the spinning icon next to the "Logging out" text stops spinning. Changing the shell in the registry for both the local machine and the current user causes this behavior. Any ideas on what might cause this? Alternatively, what steps can I take to figure out what is going wrong? In Linux you can open things in a framebuffer and examine the output, but with Windows it is not so clear to me how to see what is happening under the hood, especially with this Home Premium version.

    Update: Since logging out normally to switch the shell back to Explorer doesn't work, I have to end LiteStep first. However, when I do so, an Explorer navigation window pops up (not the taskbar). This also occurs with bb4win.
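
    For reference, the value both shells change is Winlogon's Shell entry; the per-user one overrides the machine-wide one, which makes it the safer place to experiment, since a broken shell then only affects one account. A sketch, with the litestep.exe path as a placeholder:

        Windows Registry Editor Version 5.00

        ; per-user shell override -- HKLM\...\Winlogon\Shell is the machine-wide one
        [HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "Shell"="C:\\LiteStep\\litestep.exe"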

    Read the article

  • What could be causing frequent display freezes?

    - by austen
    I just installed Ubuntu 14.04 two days ago (coming from Win8), and in the two days I've been using it, my display has frozen four or five times. The mouse won't move, but the keyboard does respond, so I can use Ctrl+Alt+Backspace to fix it. It seems like it might just be the display freezing, because one of the times I was watching a YouTube video and the audio continued playing. I have an Nvidia graphics card with the most recent Nvidia drivers enabled. I see that a lot of questions about Ubuntu freezing get marked as duplicates and linked back to a thread about what to do when it freezes. Clearly, I've got that bit figured out already, and I did read that thread for further advice. What I'm looking for is how to fix this permanently. Output from lspci -nnk | grep -iA2 VGA:

        00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
            Subsystem: Lenovo Device [17aa:2200]
            Kernel driver in use: i915

    Update: JohnnyEnglish pointed out that Ubuntu is using the integrated graphics, not my Nvidia card. It turns out my laptop uses Nvidia Optimus, and I cannot enable only the graphics card through the BIOS. I found out about Nvidia Prime and got it set up using this article. The settings panel which lets you select the graphics says 'performance mode' is enabled, but when I check which graphics controller is active through the terminal, it still says it's using the integrated graphics. I'm not sure if this could be causing the freezes, but I guess it's a starting point. Any ideas on how to resolve this?
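
    A follow-up check for the Optimus setup, assuming the nvidia-prime package from that article is installed: prime-select is what actually flips the GPU, and it only takes effect after a full logout or reboot. Also note that on Optimus laptops the discrete card often enumerates as a "3D controller" rather than VGA, so the grep above would never show it. A sketch:

        prime-select query                      # which GPU the next session will use
        sudo prime-select nvidia                # switch to the discrete card
        # log out and back in (or reboot), then verify both entries:
        lspci -nnk | grep -iA2 'VGA\|3D controller'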

    Read the article

  • Change Linux Console's Default Monitor

    - by Tim M
    Is there any way to specify which monitor the Linux console is displayed on?

    Details: I have a 3-monitor setup with 2 video cards. When I boot the computer, the BIOS displays on the PCI graphics card (which has a small monitor). When Linux starts, the console is displayed on the same monitor. Is there a way to have the console output on a different monitor? I'm using the vesafb framebuffer. I don't see a way in my BIOS to change the default video card.
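
    One avenue, assuming the second card gets its own framebuffer device (/dev/fb1): the console can be mapped to a particular framebuffer with the fbcon=map: kernel parameter, appended to the kernel line in the bootloader:

        # map all virtual consoles to framebuffer 1 instead of 0
        fbcon=map:1

    With vesafb alone there is typically only one framebuffer (the card the BIOS posts on), so this needs a KMS or second fbdev driver bound to the other card before /dev/fb1 exists.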

    Read the article

  • Cropping a PDF File's Margin During Printing

    - by JavaMan
    I'm using the free Acrobat Reader to print out some PDF documents that have very large top/bottom/left/right margins. I want to remove the margins, which waste too much space and make the fonts too small. I used to use Acrobat (the paid version with edit features) to crop the source PDF manually, but since it is an old version it does not support the new PDF format, and I don't want to upgrade for such a simple use. Is there any free way to crop/remove unwanted white margins from the printed PDF? I am thinking of printing the PDF files to a PDF printer like the Bullzip PDF Printer and enlarging the output manually so as to remove any white margin, but there does not seem to be such a feature in Bullzip PDF Printer. Is there any other virtual printer software that can be used for this purpose?
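
    One free route that avoids the virtual-printer workaround entirely, assuming a TeX distribution (MiKTeX or TeX Live) is installed: its pdfcrop utility removes the white margins losslessly before printing:

        # trim all whitespace margins, leaving a 5pt border on each side
        pdfcrop --margins 5 input.pdf cropped.pdf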

    Read the article

  • What options are there for generating custom text snippets on the Mac?

    - by Moses
    As I am getting more heavily into programming as a job and no longer as a hobby, I am definitely in need of some ways to improve my productivity. One thing that would definitely help is being able to create customized keyboard shortcuts for text/code snippets. For instance, holding down CMD+L+O+R+E+M would output a paragraph or two of the Lorem Ipsum filler text, or CMD+F+U would create a function declaration. What I am ideally looking for is a database where I can store formatted text snippets, bind them to my choice of keystrokes, and then have the text pasted whenever I perform the associated keystrokes. Are there any stand-alone applications that can do this on a Mac? Also, are there any text editors / IDEs that have this ability built in?

    Read the article

  • How do I change Clementine's play/pause indicator icons?

    - by MHC
    This is how the Clementine indicator displays play/pause: It's a minor detail, but I feel that the play and pause icons just don't go with the monochrome design of the panel. In order to change them I tried to locate all files associated with Clementine, but to no avail. Here's the output:

        /home/user/.config/Clementine/clementine.db
        /usr/bin/clementine
        /usr/share/app-install/desktop/clementine:clementine.desktop
        /usr/share/app-install/icons/application-x-clementine.png
        /usr/share/applications/clementine.desktop
        /usr/share/doc/clementine
        /usr/share/doc/clementine/README.Debian
        /usr/share/doc/clementine/changelog.Debian.gz
        /usr/share/doc/clementine/copyright
        /usr/share/icons/hicolor/64x64/apps/application-x-clementine.png
        /usr/share/icons/hicolor/scalable/apps/application-x-clementine.svg
        /usr/share/icons/ubuntu-mono-dark/apps/24/clementine-panel-grey.png
        /usr/share/icons/ubuntu-mono-dark/apps/24/clementine-panel.png
        /usr/share/icons/ubuntu-mono-light/apps/24/clementine-panel-grey.png
        /usr/share/icons/ubuntu-mono-light/apps/24/clementine-panel.png
        /usr/share/man/man1/clementine.1.gz
        /usr/share/menu/clementine
        /usr/share/pixmaps/clementine-16.xpm
        /usr/share/pixmaps/clementine.xpm
        /var/lib/dpkg/info/clementine.list
        /var/lib/dpkg/info/clementine.md5sums
        /var/lib/dpkg/info/clementine.postinst
        /var/lib/dpkg/info/clementine.postrm
        /var/lib/menu-xdg/applications/menu-xdg/X-Debian-Applications-Sound-clementine.desktop

    Can anyone tell me where to find these icons and how to change them?

    Read the article

  • SVN: Error validating server certificate for svn hook linux

    - by Dr Casper Black
    Hi, I managed to set up an SVN (over SSL) server and the TortoiseSVN client on Windows. I made a post-commit hook for a test project. The post-commit hook updates the web directory so the PHP app can run with the newest version. It all works when done over the shell. The only problem is, when I commit changes from the client on Windows, the change is committed but the hook throws the error "post-commit hook failed (exit code 1)" with output:

        Error validating server certificate for 'https://SERVER_IP:443':
         - The certificate is not issued by a trusted authority. Use the
           fingerprint to validate the certificate manually!
         - The certificate hostname does not match.
        Certificate information:
         - Hostname: DEVSRVR
         - Valid: from Fri, 28 Jan 2011 09:22:45 GMT until Sat, 28 Jan 2012 09:22:45 GMT
         - Issuer: PHP, SS, SS, SRB
         - Fingerprint: 5f:d0:50:d6:dd:a6:d4:64:a5:ac:3a:4b:7c:7d:33:e3:75:dd:23:9f
        (R)eject, accept (t)emporarily or accept (p)ermanently?
        svn: OPTIONS of 'https://SERVER_IP/svn/myproject/trunk': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://SERVER_IP)
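
    The hook fails because it runs as the server's own user, whose Subversion config has never accepted the self-signed certificate (accepting it in TortoiseSVN only stores it for the Windows user). Two common fixes, with the server user name as an assumption:

        # 1. accept the certificate permanently as the user the hook runs as
        #    (answer 'p' at the prompt; it is cached in that user's ~/.subversion)
        sudo -u www-data svn info https://SERVER_IP/svn/myproject/trunk

        # 2. or make the update inside the hook non-interactive and trusting:
        svn update /path/to/webdir --non-interactive --trust-server-cert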

    Read the article

  • Asterisk does not recognise DTMF tones from mobile phones

    - by Eugene van der Merwe
    We have an Asterisk 1.8.7.0 (the Elastix derivative) switchboard. Ever since a month ago, seemingly out of the blue, the switchboard no longer recognises DTMF tones from mobile phones. Testing the switchboard using 7777 works. Testing the switchboard from a normal phone works. Testing the switchboard from a mobile phone fails. Looking at the log file I can't see anything; I used 'asterisk -rvvvv' and 'tail -f /var/log/asterisk/full' to watch the live output and scan the logs. I assume I don't see anything because the DTMF tones are simply not being recognised. I did some brief research and found an old setting for SIP phones, 'rfc2833compensate=yes', and tried adding it to 'sip_general_custom.conf'. After that I did 'core restart when convenient', but that didn't make any difference. Could anyone give me some additional troubleshooting steps?
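
    One thing worth pinning down explicitly, since mobile carriers often negotiate DTMF differently from landlines: the dtmfmode on the SIP trunk that carries the mobile calls. A sip.conf sketch (the peer name is an assumption):

        [mobile-trunk]
        dtmfmode=rfc2833    ; out-of-band DTMF carried in the RTP stream
        ; if the carrier sends tones inside the audio itself, try: dtmfmode=inband
        ; (inband detection needs ulaw/alaw, not a compressed codec like gsm)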

    Read the article

  • The Case of the Missing Date/Time Stamp: Reporting Services 2008 R2 Snapshots

    - by smisner
    This week I stumbled upon an undocumented "feature" in SQL Server 2008 R2 Reporting Services as I was preparing a demonstration on how to set up and use report snapshots. If you're familiar with the main changes in this latest release of Reporting Services, you probably already know that Report Manager got a facelift this time around. Although this facelift was generally a good thing, one of the casualties – in my opinion – is the loss of the snapshot label that served two purposes. First, it flagged the report as a snapshot. Second, it let you know when that snapshot was created.

    As part of my standard operating procedure when demonstrating report snapshots, I point out this label, so I was rather taken aback when I didn't see it in the demonstration I was preparing. It sort of upset my routine, and I'm rather partial to my routines. I thought perhaps I wasn't looking in the right place and changed Report Manager from Tile View to Details View, but no – that label was still missing. In the grand scheme of life, it's not an earth-shattering change, but you'll have to look at the Modified Date in Details View to know when the snapshot was run. Or hope that the report developer included a textbox to show the execution time in the report. (Hint: this is a good time to add this to your list of report development best practices, whether a report gets set up as a report snapshot or not!)

    A snapshot from the past

    In case you don't remember how a snapshot appeared in Report Manager back in the old days (of SQL Server 2008 and earlier), here's an image I snagged from my Reporting Services 2008 Step by Step manuscript.

    A snapshot in the present

    A report server running in SharePoint integrated mode had no such label; there you had to rely on the Report Modified date-time stamp to know the snapshot execution time. So I guess all platforms are now consistent. Here's a screenshot of Report Manager in the 2008 R2 version. One of these reports is a snapshot and the rest execute on demand. Can you tell which is the snapshot?

    Consider descriptions as an alternative

    So my report snapshot demonstration has one less step, and I'll need to edit the Denali version of the Step by Step book. Things are simpler this way, but I sure wish we had an easier way to identify the execution methods of the reports. Consider using the description field to alert users that a report is a snapshot. It might save you a few questions about why the data isn't up to date if the users know that something changed in the source of the report. Notice that the full description doesn't display in Tile View, so keep it short and sweet, or instruct users to open Details View to see the entire description.
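
    For the best-practice textbox suggested above, the expression is a one-liner; Globals!ExecutionTime reflects when the report's data was retrieved, so for a snapshot it shows the snapshot creation time rather than the rendering time:

        ="Data as of: " & Globals!ExecutionTime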

    Read the article

  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes in DB maintenance is upgrading databases across major versions, especially for complex RAC clusters. With the release of Database Plug-in 12.1.0.5.0, EM12c Release 3 (12.1.0.3.0) now supports automated upgrading of RAC clusters in addition to standalone databases. This automation includes:

    - Upgrade of the complete cluster across the nodes (example: 11.1.0.7 CRS, ASM, RAC DB -> 11.2.0.4 or 12.1.0.1 GI, RAC DB).
    - Best practices in tune with your operations, where you can automate the upgrade in steps. Step 1: upgrade the Clusterware to Grid Infrastructure (allowing you to wait, test, and then move to the DBs). Step 2: upgrade the RAC DBs either separately or as a group (a mass upgrade of the RAC DBs in the cluster).
    - Standard prerequisite checks like the Cluster Verification Utility (CVU) and RAC checks.
    - Division of the upgrade process into non-downtime activities (like laying down the new Oracle Homes (OH) and running checks) and downtime activities (like upgrading Clusterware to GI and upgrading RAC), thereby lowering the downtime required.
    - Ability to configure backup and restore options as a part of this upgrade process. You can choose to: (a) take a backup via this process (either a Guaranteed Restore Point (GRP) or RMAN); (b) set the procedure to pause just before the upgrade step to allow you to take a custom backup; or (c) ignore backup completely, if there are external mechanisms already in place.

    High-level steps:

    1. Select the procedure "Upgrade Database" from the Database Provisioning home page.
    2. Choose the target type for upgrade and the destination version.
    3. Pick and choose the cluster; it picks up the complete topology since the clusterware/GI isn't upgraded already.
    4. Select the gold image of the destination version for deploying both the GI and RAC OHs.
    5. Specify the new OH patches and credentials; choose the restore and backup options; if required, provide additional pre and post scripts.
    6. Set the break points in the procedure execution to isolate downtime activities.
    7. Submit and track the procedure's execution status.

    The animation below captures the steps in the wizard. For the step-by-step process and to understand the support matrix, check this documentation link. Explore the functionality!! In the next blog, I will talk about automating rolling upgrades of databases in a physical standby Data Guard environment using transient logical standby.

    Read the article

  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR 2, launched this month, is its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by the W3C until around 2022. One might have thought it would take years yet to reach the point where any browsers were remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit.

    The race for HTML5 has been fuelled by the demand from Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third-party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way.

    Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 <video> and <audio> tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML5, as it has so much invested in Silverlight. Google stands to gain from Adobe AIR for Android, as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share.

    Why do we care? Because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably the most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns, and page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes.

    HTML5 is likely to provide the base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities.

    Cheers, Laila

    Read the article

  • Remove Audio stream from XVID files

    - by Kyle Brandt
    I have a bunch of Xvid files that each have an audio stream I do not want. How can I strip the unwanted audio track using the Linux command line? I don't need the whole script (the loop), just the command I would use to process each AVI file individually (unless the command has batch processing built in). I don't believe the files are in an MKV container, as mkvinfo doesn't find anything. Here is part of mplayer's output (thanks ~quack):

        [aviheader] Video stream found, -vid 0
        ID_AUDIO_ID=1
        [aviheader] Audio stream found, -aid 1
        ID_AUDIO_ID=2
        [aviheader] Audio stream found, -aid 2
        VIDEO: [XVID] 512x384 12bpp 25.000 fps 1013.4 kbps (123.7 kbyte/s)
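
    A sketch of the per-file command using ffmpeg, which can drop a stream without re-encoding, so the Xvid video and the kept audio pass through untouched. The mplayer output above shows audio ids 1 and 2; which one maps to 0:a:0 versus 0:a:1 is worth verifying on one file first:

        # keep the video and only the first audio track, copying both as-is
        ffmpeg -i input.avi -map 0:v:0 -map 0:a:0 -c copy output.avi

        # mencoder alternative from the same era, keeping audio id 1:
        mencoder -aid 1 -ovc copy -oac copy input.avi -o output.avi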

    Read the article

  • Compiling PHP 5.4 on Mac OS X 10.6.8

    - by ling
    I'm trying to compile PHP 5.4.7 on Mac OS X 10.6.8. I could install it using the default procedure:

        ./configure \
          --prefix=/usr/local \
          --with-config-file-path=/usr/local/etc \
          --with-apxs2=/usr/local/apache2/bin/apxs \
          --with-mysql
        sudo make clean
        sudo make
        sudo make install

    But if I try to compile PHP with the curl module, it fails:

        ./configure \
          --prefix=/usr/local \
          --with-config-file-path=/usr/local/etc \
          --with-curl=/usr \
          --with-apxs2=/usr/local/apache2/bin/apxs \
          --with-mysql
        sudo make clean
        sudo make

    The last lines of the make output are:

        Undefined symbols:
          "_CRYPTO_set_locking_callback", referenced from:
              _zm_shutdown_curl in interface.o
              _zm_startup_curl in interface.o
          "_CRYPTO_num_locks", referenced from:
              _zm_shutdown_curl in interface.o
              _zm_startup_curl in interface.o
          "_CRYPTO_get_id_callback", referenced from:
              _zm_startup_curl in interface.o
          "_CRYPTO_set_id_callback", referenced from:
              _zm_shutdown_curl in interface.o
              _zm_startup_curl in interface.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status
        make: *** [libs/libphp5.bundle] Error 1

    I read somewhere (http://user.xmission.com/~georgeps/documentation/tutorials/compilation_and_makefiles.html) that in this case I should tell the compiler where to find the missing library, so that it can link the missing symbols. The problem is that I don't know which library I should look for. Is it libssl2?
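
    On the 'which library' question: the missing CRYPTO_* symbols are defined in OpenSSL's libcrypto (paired with libssl), not a 'libssl2'; PHP's curl extension (the interface.o above) calls them for libcurl's SSL thread locking. One possible fix, as a sketch, is to hand the linker both libraries explicitly:

        LIBS="-lssl -lcrypto" ./configure \
          --prefix=/usr/local \
          --with-config-file-path=/usr/local/etc \
          --with-curl=/usr \
          --with-apxs2=/usr/local/apache2/bin/apxs \
          --with-mysql
        sudo make clean && make && sudo make install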

    Read the article

  • Oracle VM server for SPARC 2.2 on S11

    - by Liam Merwick
    Oracle VM Server for SPARC 2.2 has been released for a little while now. The https://blogs.oracle.com/virtualization blog has an overview of all the 2.2 features. Initially, what was released was the SVR4 package for Solaris 10 (which is unbundled and wasn't constrained by any external schedule). On Solaris 11, the 'ldomsmanager' package is built into Solaris (and therefore doesn't need to be downloaded separately), so it is delivered as part of an S11 Support Repository Update (SRU). Some of the features in 2.2 are specific to S11 (SR-IOV and the ability to live migrate between machines with different CPU types), so there have been many requests to know when the S11 bits are coming.

    Solaris 11 SRU8.5 was released on Friday, and it includes Oracle VM Server for SPARC 2.2, so if you're already running an S11 SRU, all you need do is a 'pkg update' to get the 2.2 bits. If you're still running the original S11 and your 'pkg publisher' output shows the /release repository, then you'll need to sign up for the /support repo by getting the appropriate keys and certificates to access the repository (this requires a support contract). The 2.2 Admin Guide documents how to do this upgrade on S11.

    Two S11 articles with useful details on upgrading (not just 'ldomsmanager') via the support repositories are:

    - How to Update Oracle Solaris 11 Systems From Oracle Support Repositories, by Glynn Foster
    - Tips for Updating Your Oracle Solaris 11 System from the Oracle Support Repository, by Peter Dennis

    In particular, if you'd like to stick with the v2.1 release when upgrading to SRU8.5 or greater, see the 'pkg freeze' section of Peter's article.
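
    For reference, the update itself is the standard IPS flow; the freeze step below follows the approach in Peter's article, with the short package name as an assumption (the full FMRI is system/ldoms/ldomsmanager):

        pkg publisher        # origin should point at the /support repository
        pkg update           # pulls SRU8.5 or later, which carries ldomsmanager 2.2

        # to take the SRU but stay on the 2.1 release of Oracle VM Server for SPARC:
        pkg freeze ldomsmanager@2.1
        pkg update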

    Read the article
