Search Results

Search found 67 results on 3 pages for 'kristian nissen'.

Page 1/3 | 1 2 3  | Next Page >

  • Amavisd start error

    - by Kristian
    I can't start amavis. It gives an error: Starting amavisd: Error in config file "/etc/amavis/conf.d/05-domain_id": Insecure directory in $ENV{PATH} while running with -T switch at /etc/amavis/conf.d/05-domain_id line 7. Line 7 is: chomp($mydomain = `head -n 1 /etc/mailname`); This problem occurred after restarting my computer. I don't know much about amavis, so any help is appreciated. Regards, Kristian
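
    A possible direction (untested sketch, not from the original post): under Perl's -T taint mode, backticks fail if $ENV{PATH} contains a writable or relative directory, so one can either set a safe PATH before the backtick or avoid the shell entirely. The PATH value below is an assumption; adjust it to the system.

        # in /etc/amavis/conf.d/05-domain_id, before line 7
        $ENV{PATH} = '/bin:/usr/bin';    # untaint PATH with a known-safe value

        # or skip the external command and read the file directly
        open(my $fh, '<', '/etc/mailname') or die "cannot read /etc/mailname: $!";
        chomp($mydomain = <$fh>);
        close($fh);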

    Read the article

  • My headset doesn't work [closed]

    - by Kristian Flatheim Jensen
    Hello! I am not sure if this question fits on this site; if it doesn't, please let me know and I will remove it. Here is my problem: I just got a SteelSeries headset for Christmas (yay!) and quickly plugged it into my MacBook Pro. The audio output works fine, but I am having issues with the audio input. I have had these kinds of problems before and I think my Mac may be broken, but then I saw this post: http://www.biloca.com/blog/?p=25 where they talked about some power issues with the microphone on the Mac Mini. I did not quite follow what they were discussing, so I have a simple question: does anyone have an idea why my microphone doesn't work? Please help! Best regards, Kristian

    Read the article

  • Installing e text editor

    - by kristian nissen
    I am trying to get e-text editor to run. I read http://www.e-texteditor.com/forum/viewtopic.php?p=14953#14953 and "Compile e-text editor on Linux" as well. But on my 10.04 Lucid it fails at the following step:
    ./build_externals_linux.sh debug
    with the following error messages:
    Building debug binaries
    Building 32-bit binaries
    Going to place output in /opt/etexteditor/external/out.debug
    ./build_externals_linux.sh: line 41: pushd: bakefile: No such file or directory
    ./build_externals_linux.sh: line 42: ./configure: No such file or directory
    Cannot compile bakefile
    ./build_externals_linux.sh: line 46: popd: directory stack empty
    ./build_externals_linux.sh: line 49: pushd: metakit: No such file or directory
    ./build_externals_linux.sh: line 50: cd: builds: No such file or directory
    Cannot compile MetaKit
    ./build_externals_linux.sh: line 56: popd: directory stack empty
    ./build_externals_linux.sh: line 59: pushd: pcre: No such file or directory
    ./build_externals_linux.sh: line 60: ./configure: No such file or directory
    Cannot compile pcre
    ./build_externals_linux.sh: line 66: popd: directory stack empty
    ./build_externals_linux.sh: line 69: pushd: tinyxml: No such file or directory
    make: *** No rule to make target `clean'. Stop.
    cannot compile TinyXML
    ./build_externals_linux.sh: line 77: popd: directory stack empty
    ./build_externals_linux.sh: line 80: pushd: libtommath: No such file or directory
    make: *** No rule to make target `clean'. Stop.
    Cannot compile LTM
    ./build_externals_linux.sh: line 85: popd: directory stack empty
    ./build_externals_linux.sh: line 88: pushd: libtomcrypt: No such file or directory
    make: *** No rule to make target `clean'. Stop.
    Cannot compile LTC
    ./build_externals_linux.sh: line 93: popd: directory stack empty
    ./build_externals_linux.sh: line 96: pushd: wxwidgets: No such file or directory
    ./build_externals_linux.sh: line 97: ./configure: No such file or directory
    Cannot compile wxWidgets
    ./build_externals_linux.sh: line 104: popd: directory stack empty
    ./build_externals_linux.sh: line 107: pushd: webkit: No such file or directory
    make: *** No rule to make target `clean'. Stop.
    ./build_externals_linux.sh: line 109: ./WebKitTools/Scripts/build-webkit: No such file or directory
    Cannot compile WebKit
    ./build_externals_linux.sh: line 113: popd: directory stack empty
    What am I missing?
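
    The errors suggest the external source trees the script expects are simply not present. A quick check (untested sketch; the directory list is inferred from the error output above, and the exact layout of the e source tree is an assumption):

        # run from the directory containing build_externals_linux.sh
        for d in bakefile metakit pcre tinyxml libtommath libtomcrypt wxwidgets webkit; do
            [ -d "$d" ] || echo "missing: $d"
        done

    If they are all missing, the step that fetches or unpacks the external dependencies was probably skipped before running the build script.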

    Read the article

  • System crashes/lockups + compiz/cairo/gnome-panel crashing due to cached ram, please help?

    - by Kristian Thomson
    Can someone help me troubleshoot system crashes and lockups which result in Compiz, Cairo-Dock and gnome-panel crashing? I also get no window borders after the crash and a lot of kernel memory errors. The logs tell me that apps were killed because there was not enough memory, but the system is caching around 14 GB of my RAM, so I'm a bit stuck on what is causing it and how to stop it. I'm running Ubuntu 12.10 on a 2011 Mac Mini with 16 GB RAM. Here are some of the log entries that look like they could be causing trouble. I woke up this morning to find Chrome, Skype, Cairo-Dock and a few others had been killed, and here is what the log said:
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.959890] Out of memory: Kill process 12247 (chromium-browse) score 101 or sacrifice child
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.959893] Killed process 12247 (chromium-browse) total-vm:238948kB, anon-rss:17064kB, file-rss:20008kB
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.972283] Out of memory: Kill process 10976 (dropbox) score 3 or sacrifice child
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.972288] Killed process 10976 (dropbox) total-vm:316392kB, anon-rss:115484kB, file-rss:16504kB
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.975890] Out of memory: Kill process 10887 (rhythmbox) score 3 or sacrifice child
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.975895] Killed process 11515 (tray_icon_worke) total-vm:63336kB, anon-rss:15960kB, file-rss:11436kB
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.281535] Out of memory: Kill process 10887 (rhythmbox) score 3 or sacrifice child
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.281539] Killed process 10887 (rhythmbox) total-vm:528980kB, anon-rss:92272kB, file-rss:36520kB
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.283110] Out of memory: Kill process 10889 (skype) score 3 or sacrifice child
    Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.283113] Killed process 10889 (skype) total-vm:415056kB, anon-rss:84880kB, file-rss:22160kB
    Looking deeper, I saw that the whole time I am getting these out-of-memory kernel errors, along with something mentioning radeon. I have a Radeon HD 6600M graphics card using the open source driver, not the proprietary one, and I was wondering whether switching to the proprietary driver would solve the problem. Also, while writing this in Chrome, Rhythmbox and Chrome were killed with out-of-memory errors (or so the log reports), even though I had 7 GB of free RAM at the time, with another 7 GB cached. Here is a full copy of the kern.log entries from when I began typing this question: http://pastebin.com/cdxxDktG Thanks in advance, Kris
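
    A quick way to tell whether the RAM is genuinely exhausted or merely used for page cache (cached memory is reclaimable and should not by itself trigger the OOM killer) is to compare the cached and truly free figures, for example (standard procps tools, not from the original post):

        free -m                          # on 12.10, the "-/+ buffers/cache" row shows memory actually free to applications
        grep -i commit /proc/meminfo     # Committed_AS vs CommitLimit hints at overcommit pressure

    If the cached figure is huge but applications are still being killed, the kernel log excerpt above is worth scanning for a single process with a runaway total-vm value.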

    Read the article

  • PHP Development Environment (Host: Windows 7, Guest: Ubuntu)

    - by Kristian Leiws Jones
    Since editing files live on a remote server slows down development, I use XAMPP on Windows to develop and then run the web apps on a Linux server. However, to avoid environment dependencies I'd like the development environment to mirror the live one. What I'm asking is: is running the development server on Ubuntu inside VirtualBox, while editing the source files via FTP/Dreamweaver, a good idea? If so, and I wanted to view the local website on the host OS (Windows), how would I do this? Does the guest OS have a LAN/local IP address? I notice in "ipconfig /all" on Windows there are "tunneling" adapters which I assume belong to VirtualBox, so I guess the guest OS has the same LAN/local IP address? If so, how would I view websites hosted on the guest OS from the host OS? I'd also need to run an FTP server on the guest OS. Note: I need Windows! I would love to use Linux all the way -.-
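
    One way this is commonly wired up (a sketch, not from the original post; the VM name, adapter name and ports are placeholders): either give the guest a bridged adapter so it gets its own LAN IP, or keep NAT and forward host ports to the guest's web and FTP servers.

        # Option A: bridged networking - the guest gets its own address on the LAN
        VBoxManage modifyvm "UbuntuDev" --nic1 bridged --bridgeadapter1 "Local Area Connection"
        # then inside the guest, find the address to browse to from Windows
        ip addr show

        # Option B: keep NAT and forward host ports to the guest
        VBoxManage modifyvm "UbuntuDev" --natpf1 "web,tcp,,8080,,80"
        VBoxManage modifyvm "UbuntuDev" --natpf1 "ftp,tcp,,2121,,21"
        # the site is then reachable from Windows at http://localhost:8080

    With NAT, plain FTP's data connections can be awkward through a single port forward, so SFTP (port 22) is often the simpler choice for the Dreamweaver connection.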

    Read the article

  • Out of space despite lots of free space remaining

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the "Disk Usage Analyser" is opened. At the top it says:
    Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB)
    Folder -- Usage -- Size
    / -- 100% -- 12.5 GB
    usr -- 44.8% -- 5.6 GB
    home -- 30.3% -- 3.8 GB
    lib -- 13.0% -- 1.6 GB
    var -- 9.1% -- 1.1 GB
    boot -- 2.5% -- 309.5 GB
    and a lot of small contributors like etc, opt, sbin, bin etc. I do not really understand this problem, since the analyser says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:
    Filesystem     Inodes   IUsed   IFree IUse% Mounted on
    /dev/sda7      610800  576874   33926   95% /
    udev           213451     563  212888    1% /dev
    tmpfs          218524     486  218038    1% /run
    none           218524       3  218521    1% /run/lock
    none           218524       7  218517    1% /run/shm
    /dev/sda8     2264752   16371 2248381    1% /home
    What does this mean?
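
    For context (editorial note, not part of the original question): df -i reports inode usage rather than disk blocks, so a filesystem can run out of inodes (too many files) or out of blocks (too much data) independently; comparing df -i with df -h shows which limit the root partition is hitting. A rough way to see where the file count goes on the root filesystem, assuming GNU coreutils:

        sudo find / -xdev -type f | cut -d/ -f2 | sort | uniq -c | sort -n
        df -h /        # the block-level picture, for comparison

    The -xdev flag keeps find on the root filesystem, so separately mounted partitions such as /home are not counted.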

    Read the article

  • Is my computer slow due to lack of swap

    - by Kristian Jensen
    A few months ago, I installed Ubuntu 12.04 alongside Windows 7 on my Asus EEE-PC 1015bx. It has a tendency to freeze, and when trying to investigate I found that a swap partition of only 256 MB had been created. The Asus EEE-PC 1015bx comes with only 1 GByte of RAM, and it is not possible to add more or exchange the existing 1 GByte for a larger module. When looking at the system monitor, it looks like all the swap is being used along with 70-75% of the RAM, even with very few applications running. Can the lack of swap space be the reason for my computer running slowly and at times freezing? How can I add a swap partition, or should I add a swap file instead? At the moment, I see two partitions in the system monitor: one 28.6 GByte ext4 partition, which must be the one containing Ubuntu, and one 100 GByte fuseblk partition, which I assume is the one holding Windows. It shows that I have 18.6 GByte of free space on the ext4 partition. Can I "take a bite" from the ext4 partition and convert it into a swap partition? I was thinking of something like 3 GBytes of swap, considering my limited RAM. I hope that someone can guide me through this. Thank you.
    20th Oct 2012 - Further details: Thank you for the answer below, which I find very useful. I am certainly considering switching to one of the suggested shells, as I can see from the Internet that many have posted that these require far fewer resources than Ubuntu; it seems to me that Lubuntu is the perfect match for my very limited computer. I will have to wait a few days, though, as I am presently limited by a very slow and restricted Internet connection via satellite. But will Lubuntu install as simply another shell replacing Unity, or will it replace Ubuntu altogether? Will the software that I have installed under Ubuntu still be accessible in Lubuntu? And can I return to Ubuntu if required? Regarding the actual question about swap: when I run GParted, it shows me one NTFS partition of 100 GBytes from which it boots, and the previously mentioned ext4 partition of 28.6 GBytes is not shown. Could it be that my Ubuntu installation resides inside this 100 GByte NTFS partition? And if so, can I take a bite of this for my swap partition? GParted is shown in Danish, but I hope that you can make out what I mean. System monitoring shows the details below. Once again I sincerely hope that you can help. Thank you.
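
    For reference (a generic sketch, not from the original post; the 2 GB size is an arbitrary example): on 12.04 a swap file can be added without repartitioning, which avoids resizing the ext4 partition at all.

        sudo dd if=/dev/zero of=/swapfile bs=1M count=2048   # create a 2 GB file
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        # make it permanent by adding this line to /etc/fstab:
        # /swapfile  none  swap  sw  0  0

    Afterwards, swapon -s should list both the old 256 MB partition and the new swap file.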

    Read the article

  • Wireless card disappears when I restart Ubuntu

    - by Kristian Jones
    I used 'Additional Drivers' to install the 'Broadcom STA wireless driver' and it returns an error. Within jockey.log it says the following numerous times: 2011-02-14 21:24:06,945 DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx: blacklisted, b43: blacklisted, b43legacy: blacklisted. After it returns the error, the network card works temporarily until I restart the laptop. When I restart, I have to go through the whole procedure of trying to activate the driver again; it returns an error but the card works temporarily. The network card is as follows, on a Dell Inspiron 1545: Broadcom Corporation BCM4312 802.11b/g LP-PHY [14e4:4315] Rev 01. I have been trying to solve this myself for many hours. Any help is appreciated.
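
    A possible direction (untested sketch, not from the original post): for the BCM4312, the STA driver lives in the bcmwl-kernel-source package, and reinstalling it usually rebuilds the wl module and rewrites the blacklist entries so the driver survives a reboot.

        sudo apt-get install --reinstall bcmwl-kernel-source
        sudo modprobe wl
        # check that wl is loaded and that only the competing drivers are blacklisted
        lsmod | grep wl
        grep -r blacklist /etc/modprobe.d/ | grep -E 'b43|bcm43xx|wl'

    If wl itself shows up as blacklisted in one of those files, removing that line and running sudo update-initramfs -u before rebooting would be the next thing to try.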

    Read the article

  • State of the art Culling and Batching techniques in rendering

    - by Kristian Skarseth
    I'm currently working on upgrading and restructuring an OpenGL render engine. The engine is used for visualising large scenes of architectural data (buildings with interiors), and the number of objects can become rather large. As is the case with any building, there are a lot of objects occluded behind walls, and you naturally only see the objects that are in the same room as you, or the exterior if you are outside. This leaves a large number of objects that should be removed through occlusion culling and frustum culling. At the same time there is a lot of repetitive geometry that can be grouped into render batches, and also a lot of objects that could use instanced rendering. The way I see it, it can be difficult to combine render batching and culling in an optimal fashion: if you batch too many objects into the same VBO, it becomes difficult to cull objects on the CPU and skip rendering that batch; if you skip culling on the CPU, a lot of objects are processed by the GPU while they are not visible; and if you skip batching completely in order to cull more easily on the CPU, you end up with an unwanted, high number of draw calls. I have done some research into existing techniques and theories for how these problems are solved in modern graphics, but I have not been able to find any concrete solution. An idea a colleague and I came up with was restricting batches to objects relatively close to each other, e.g. all chairs in a room or everything within a radius of n meters; this could be simplified and optimized through the use of octrees. Does anyone have any pointers to techniques used for scene management, culling, batching etc. in state-of-the-art modern graphics engines?
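
    To make the "cull whole batches" idea concrete, here is a minimal sketch (illustrative only, not from the original post; the struct layout and names are made up): each spatially grouped batch keeps one bounding box, and a conservative box-vs-frustum test decides whether to issue its draw call at all.

        // One AABB per batch of nearby objects; cull the batch, not the individual objects.
        struct AABB  { float min[3], max[3]; };
        struct Plane { float n[3], d; };                  // inside when dot(n, p) + d >= 0
        struct Batch { AABB bounds; unsigned vbo; int indexCount; };

        static bool outsidePlane(const AABB& b, const Plane& p) {
            // test the box corner furthest along the plane normal (the "positive vertex")
            float x = p.n[0] >= 0 ? b.max[0] : b.min[0];
            float y = p.n[1] >= 0 ? b.max[1] : b.min[1];
            float z = p.n[2] >= 0 ? b.max[2] : b.min[2];
            return p.n[0]*x + p.n[1]*y + p.n[2]*z + p.d < 0;
        }

        bool batchVisible(const Batch& b, const Plane frustum[6]) {
            for (int i = 0; i < 6; ++i)
                if (outsidePlane(b.bounds, frustum[i])) return false;   // fully outside one plane
            return true;                                                // conservative: may keep some invisible batches
        }

    The trade-off described in the question shows up directly here: the larger the batch, the looser its AABB and the less often it can be rejected, while smaller batches cull better but multiply draw calls.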

    Read the article

  • Why does 12.04 upgrade abort with out of space error when I have lots of it?

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the "Disk Usage Analyser" is opened. At the top it says:
    Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB)
    Folder -- Usage -- Size
    / -- 100% -- 12.5 GB
    usr -- 44.8% -- 5.6 GB
    home -- 30.3% -- 3.8 GB
    lib -- 13.0% -- 1.6 GB
    var -- 9.1% -- 1.1 GB
    boot -- 2.5% -- 309.5 GB
    and a lot of small contributors like etc, opt, sbin, bin etc. I do not really understand this problem, since the analyser says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:
    Filesystem     Inodes   IUsed   IFree IUse% Mounted on
    /dev/sda7      610800  576874   33926   95% /
    udev           213451     563  212888    1% /dev
    tmpfs          218524     486  218038    1% /run
    none           218524       3  218521    1% /run/lock
    none           218524       7  218517    1% /run/shm
    /dev/sda8     2264752   16371 2248381    1% /home
    The output of df -h:
    Filesystem  Size  Used Avail Use% Mounted on
    /dev/sda7   9,3G  7,8G  1,1G  88% /
    udev        993M  4,0K  993M   1% /dev
    tmpfs       401M  884K  400M   1% /run
    none        5,0M     0  5,0M   0% /run/lock
    none       1003M  152K 1002M   1% /run/shm
    /dev/sda8    35G  4,0G   29G  13% /home
    /dev/sda2   101G   64G   37G  64% /media/A2C8E28BC8E25CD3
    Running sudo fdisk -l gives:
    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000080
    Device Boot      Start        End     Blocks  Id System
    /dev/sda1           63      96389     48163+  de Dell Utility
    /dev/sda2 *      98304  210434488 105168092+   7 HPFS/NTFS/exFAT
    /dev/sda3    210436094  312576704  51070305+   f W95 Ext'd (LBA)
    /dev/sda5    306279288  312576704   3148708+  dd Unknown
    /dev/sda6    210436096  214341631    1952768  82 Linux swap / Solaris
    /dev/sda7    214343680  233873407    9764864  83 Linux
    /dev/sda8    233875456  306278399   36201472  83 Linux
    Partition table entries are not in disk order
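
    One reading of the output above (editorial note, not from the original question): the Disk Usage Analyser is summing several filesystems, while df -h shows that the root partition /dev/sda7 is only 9.3 GB and 88% full, which is what the upgrade and the login warning are complaining about. A few standard ways to reclaim space on a small root partition (generic commands, offered as a sketch):

        sudo apt-get clean                      # drop cached .deb package files in /var/cache/apt
        sudo apt-get autoremove --purge         # remove packages (e.g. old kernels) nothing depends on
        dpkg -l 'linux-image-*' | grep ^ii      # list installed kernels before removing old ones
        sudo du -xh --max-depth=1 / | sort -h   # see which top-level directories on / are largest

    The -x flag keeps du on the root filesystem, so /home and the NTFS partition are not included in the totals.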

    Read the article

  • Installing Microsoft VX1000 webcam

    - by KRistian
    I found a tutorial with people saying it works; here are the instructions I followed. I opened a shell as root on my system and launched the following:
    wget http://linuxtv.org/hg/v4l-dvb/archive/tip.tar.gz
    tar zxvf tip.tar.gz
    cd v4l... (whatever the newly created directory name is)
    make all
    sudo make install
    Then I edited /etc/modprobe.d/blacklist-custom and added blacklist sn9c102. After reboot, I launched sudo gstreamer-properties. However, when I type tar zxvf tip.tar.gz it displays:
    tar: You may not specify more than one `-Acdtrux' or `--test-label' option
    Try `tar --help' or `tar --usage' for more information.
    Why? How can I do this? Thanks in advance.
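
    A note on that tar message (editorial aside, not from the original post): it means tar believes two operation modes were given at once, which often happens when the command was copied from a web page with mangled characters or there are stray characters on the line. Retyping the command by hand with an explicit leading dash, and checking that the download really is a gzip tarball, is a quick sanity check:

        file tip.tar.gz          # should report "gzip compressed data", not HTML
        tar -xzvf tip.tar.gz

    If file reports HTML, the wget fetched an error page rather than the archive and the download needs to be repeated.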

    Read the article

  • 301 redirect to 404 page?

    - by Kristian
    Currently I'm removing the www. prefix from my URLs and using .htaccess to do the job. Since we have new software and a cleaned database, some of the old URLs don't exist anymore, and therefore some requests redirect to a 404 page:
    1. www.domain.com/old-page - .htaccess redirects to the non-www URL (301)
    2. domain.com/old-page - the page does not exist (404)
    Does this method have any SEO issues, or does it even affect PageRank? Or should I check that the page exists before redirecting, and serve the 404 without a redirect?
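
    For reference, the www-stripping rule being described is typically something like this (a generic sketch with a placeholder domain, not the poster's actual rules):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

    The 301 itself carries no content, so the chain ends at whatever status the non-www URL finally returns - here, a 404 for the removed pages.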

    Read the article

  • Problems setting NTP server with w32tm for a DC that is a Hyper-V guest

    - by R.Tonheim
    Hello! I have tried to set my DC to get its time from several NTP servers. I followed this answer (http://serverfault.com/questions/24298/w32time-sync-problems-for-hyper-v-guests-w32time-event-ids-38-24-29-35/24299#24299) to do it. First I disabled Time Synchronization in the Hyper-V Integration Services for each guest, then restarted the Windows Time service on the guest. Before this I had used this command:
    w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper.uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
    and the cmd said: The command completed successfully. But the time was still 10 minutes wrong. I ran w32tm again after restarting the DC, without it having any effect. w32tm /query /status still says "Source: Local CMOS Clock".
    FROM MY CMD:
    Microsoft Windows [Version 6.0.6002]
    Copyright (c) 2006 Microsoft Corporation. All rights reserved.
    C:\Users\Administrator.MHG>w32tm /query /status
    Leap Indicator: 0(no warning)
    Stratum: 1 (primary reference - syncd by radio clock)
    Precision: -6 (15.625ms per tick)
    Root Delay: 0.0000000s
    Root Dispersion: 10.0000000s
    ReferenceId: 0x4C4F434C (source name: "LOCL")
    Last Successful Sync Time: 05.09.2009 20:06:21
    Source: Local CMOS Clock
    Poll Interval: 6 (64s)
    C:\Users\Administrator.MHG>w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper.uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
    The command completed successfully.
    C:\Users\Administrator.MHG>w32tm /query /status
    Leap Indicator: 0(no warning)
    Stratum: 1 (primary reference - syncd by radio clock)
    Precision: -6 (15.625ms per tick)
    Root Delay: 0.0000000s
    Root Dispersion: 10.0000000s
    ReferenceId: 0x4C4F434C (source name: "LOCL")
    Last Successful Sync Time: 05.09.2009 20:06:21
    Source: Local CMOS Clock
    Poll Interval: 6 (64s)
    C:\Users\Administrator.MHG>
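
    A common follow-up to that symptom (suggestion, not from the original post; it only reconfigures the time service): the manual peer list does not take effect until w32time is restarted and told to rediscover its source, so after the /config command one would typically run:

        net stop w32time
        net start w32time
        w32tm /resync /rediscover
        w32tm /query /source
        w32tm /query /status

    If the source still reads "Local CMOS Clock", checking that outbound UDP port 123 is open and that the Hyper-V time synchronization integration service really is unticked for this guest would be the next steps.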

    Read the article

  • Replace Windows XP with Ubuntu with no USB boot option and no CD?

    - by kristian nissen
    I have an old laptop running Windows XP, and I want to replace it with the latest version of Ubuntu. The problem is that the laptop does not support booting from USB - I checked the BIOS but the option is not working, and I tried http://www.pendrivelinux.com/testing-your-system-for-usb-boot-compatibility/ with no luck. I also tried burning the ISO to a DVD, but the burn process keeps failing. What options do I have? Isn't it possible to install Ubuntu somehow by downloading it and having it replace the current OS?

    Read the article

  • Spree how to use Hooks

    - by kristian nissen
    According to http://spreecommerce.com/documentation/theming.html#hooks you should be able to use Spree hooks via my_theme_hooks.rb, but how? Do I need to extend a class, and where can I check which hooks are used in the default theme?
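
    From memory of the 0.x-era hook API (a sketch that may not match the Spree version in use; the hook name and partial below are made up for illustration), the hooks file defines a listener class, and the default theme's hook points can be found by grepping the core views for calls to the hook helper:

        # my_theme_hooks.rb
        class MyThemeHooks < Spree::ThemeSupport::HookListener
          # insert a partial after an existing hook point
          insert_after :homepage_sidebar_navigation, 'shared/my_promo_box'
        end

    To list which hook names exist, something like grep -rn "hook" app/views in the Spree source (or wherever the core views live in the app) shows every place the view layer declares a hook point.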

    Read the article

  • SproutCore - Todos tutorial, addButton not responding in Firefox

    - by kristian nissen
    I'm testing the SproutCore Todos tutorial. I have checked the code in step 5 and it's identical to my code, at least as far as I can see, but the addButton is not responding to click events.
    addTask: function () {
      var task;
      task = Sinatra.store.createRecord(Sinatra.Task, {
        'description': 'New Task',
        'isDone': false,
        'priority': 1
      });
      this.selectObject(task);
      this.invokeLater(function () {
        var contentIndex = this.indexOf(task);
        var list = Sinatra.mainPage.getPath('mainPane.middleView.contentView');
        var listItem = list.itemViewForContentIndex(contentIndex);
        listItem.beginEditing();
      });
      return YES;
    },
    and in the main:
    addButton: SC.ButtonView.design({
      layout: { centerY: 0, height: 24, right: 12, width: 100 },
      title: 'Add Task',
      target: 'Sinatra.tasksController',
      action: 'addTask'
    }),
    I can't see the problem, please help. (I have only tested this in Firefox on Kubuntu.)

    Read the article

  • Drupal 6 node_view empty

    - by kristian nissen
    I'm trying to produce a page with a list of specific nodes, but node_view returns an empty string. This is my query:
    function events_upcoming() {
      $output = '';
      $has_events = false;
      $res = pager_query(db_rewrite_sql("SELECT n.nid, n.created FROM {node} n WHERE n.type = 'events' AND n.status = 1 ORDER BY n.sticky DESC, n.created DESC"), variable_get('default_nodes_main', 10));
      while ($n = db_fetch_object($res)) {
        $output .= node_view(node_load($n->nid), 1);
        $has_events = true;
      }
      if ($has_events) {
        $output .= theme('pager', NULL, variable_get('default_nodes_main', 10));
      }
      return $output;
    }
    hook_menu (part of):
    'events/upcoming' => array(
      'title' => t('Upcoming Events'),
      'page callback' => 'events_upcoming',
      'access arguments' => array('access content'),
      'type' => MENU_SUGGESTED_ITEM
    ),
    the implementation of hook_view:
    function events_view($node, $teaser = false, $page = false) {
      $node = node_prepare($node, $teaser);
      if ($page) {
        // TODO: Handle breadcrumb
      }
      return $node;
    }
    Now, if I add a var_dump($node) inside events_view the node is present and I can see the values I want, and if I add a var_dump inside the while loop in events_upcoming I also get a node id from the query. The strange thing is, when I load localhost/events/upcoming I see the pager and nothing else. I have used blog.module as a reference, but what am I missing here?

    Read the article

  • Rails shoulda and factory_girl setup

    - by kristian nissen
    I have installed both shoulda and factory_girl. I can run shoulda just fine, but when I add this:
    require 'factory_girl'
    Factory.define :user do |u|
      u.mail '[email protected]'
      u.pass 'secret'
    end
    to my test/test_helper.rb I get this error when I execute rake test:units:
    /test/test_helper.rb:1:in `require': no such file to load -- factory_girl (LoadError)
    I have installed both gems using:
    sudo gem install thoughtbot-shoulda --source=http://gems.github.com
    sudo gem install thoughtbot-factory_girl --source=http://gems.github.com
    and can see both of them installed fine. And by the way, this works fine as well:
    script/console
    Loading development environment (Rails 2.3.8)
    >> require 'factory_girl'
    => []
    so requiring the gem seems to be working.
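
    A detail worth noting (editorial aside, not from the original post): on Rails 2.3 the usual way to make a github-namespaced gem requirable under its short name in the test environment is a config.gem entry with :lib, for example in config/environments/test.rb:

        config.gem 'thoughtbot-factory_girl',
                   :lib    => 'factory_girl',
                   :source => 'http://gems.github.com'

    followed by rake gems:install RAILS_ENV=test. Whether this is the cause here depends on how the gem was registered in this particular app.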

    Read the article

  • http_post_data basic authentication?

    - by kristian nissen
    I have a remote service that I need to access; according to the documentation it's restricted using basic authentication, and all requests have to be posted (HTTP POST). The documentation contains this code example - VBScript:
    Private Function SendRequest(ByVal Url, ByVal Username, ByVal Password, ByVal Request)
      Dim XmlHttp
      Set XmlHttp = CreateObject("MSXML2.XmlHttp")
      XmlHttp.Open "POST", Url, False, Username, Password
      XmlHttp.SetRequestHeader "Content-Type", "text/xml"
      XmlHttp.Send Request
      Set SendRequest = XmlHttp
    End Function
    How can I accomplish this in PHP? When I post data to the remote server it replies 401 Unauthorized Access, which is fine because I'm not sending my user/pass, just the data. But when I add my user/pass as described here: http://dk.php.net/manual/en/http.request.options.php like this:
    $res = http_post_data('https://example.com', $data, array(
      'Content-Type: "text/xml"',
      'httpauth' => base64_encode('user:pass'),
      'httpauthtype' => HTTP_AUTH_BASIC
    ));
    (the protocol is https) I get a runtime error in return (it's a .NET service). I have tried it without the base64_encode, but with the same result.
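
    For comparison, the pecl_http 1.x request options are usually written like this (a sketch based on the linked manual page; the URL and credentials are placeholders): httpauth expects the plain "user:pass" string rather than a base64-encoded value, and the content type goes under the headers key.

        $res = http_post_data('https://example.com/endpoint', $data, array(
            'headers'      => array('Content-Type' => 'text/xml'),
            'httpauth'     => 'user:pass',
            'httpauthtype' => HTTP_AUTH_BASIC,
        ));

    The extension then builds the Authorization header, including the base64 step, itself.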

    Read the article

  • htaccess rewrite rule loads assets twice?

    - by kristian nissen
    I am using these rules:
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_URI} !\..+$
    RewriteCond %{REQUEST_URI} !(.*)/$
    RewriteRule ^(.*+)$ /$1/ [L,R]
    RewriteRule !\.(js|ico|gif|jpg|png|css|html|swf|flv|xml)$ index.php
    But when I check the resources loaded in Chrome, I can see that my .css files are loaded twice.
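
    One thing worth checking (a guess offered as a sketch, not a confirmed diagnosis): RewriteCond lines apply only to the RewriteRule that immediately follows them, so the second rule here runs with no conditions at all, and rewrite/redirect chains like that are a common source of double asset loads. Guarding the front-controller rule with the usual file/directory checks keeps requests for real files out of index.php:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule !\.(js|ico|gif|jpg|png|css|html|swf|flv|xml)$ index.php [L]

    Watching the Network panel for a 301/302 on the .css requests would show whether the trailing-slash redirect above is the rule actually firing for them.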

    Read the article

  • How to run a shell command and selectively ignore the status?

    - by Walter Nissen
    I've got a shell script that I would like to stop with an error on nonzero status most of the time, but in some cases I want to ignore it. For example:
    #!/bin/tcsh -vxef
    cp file/that/might/not/exist .    # want to ignore this status
    cp file/that/might/not/exist . ; echo "this doesn't work"
    cp file/that/must/exist .         # want to stop if this status is nonzero
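
    Side note (not from the original post, and it concerns sh/bash rather than tcsh): under a POSIX shell's set -e, the standard idiom for selectively ignoring a failure is to make the command part of a conditional, for example:

        #!/bin/sh -e
        cp file/that/might/not/exist . || true    # failure is swallowed, script continues
        cp file/that/must/exist .                 # failure still aborts the script

    Whether tcsh's -e flag honours an equivalent construct is worth verifying before relying on it.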

    Read the article

  • Rails will_paginate custom route

    - by kristian nissen
    How can I use will_paginate with a custom route? I have the following in my routes:
    map.connect 'human-readable/:name', :controller => :tags, :action => 'show'
    As far as I can tell, will_paginate builds its page links with url_for, but I want the generated links to use the 'human-readable' path instead. How can I do that?
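
    One direction that is sometimes suggested for Rails 2 routing (a sketch, not a verified fix for this app; the @tags collection name is assumed): make the pretty route a named route placed before the default routes, and pass the current :name through to the pagination links via will_paginate's :params option.

        # config/routes.rb
        map.human_readable 'human-readable/:name', :controller => 'tags', :action => 'show'

        # in the view
        <%= will_paginate @tags, :params => { :name => params[:name] } %>

    With the route defined ahead of the generic ":controller/:action/:id" routes, url_for should regenerate the human-readable form for each page link.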

    Read the article

1 2 3  | Next Page >