Search Results

Search found 20433 results on 818 pages for 'marketing always wins'.

Page 151/818 | < Previous Page | 147 148 149 150 151 152 153 154 155 156 157 158  | Next Page >

  • Caching query results in django

    - by Marcio Cruz
    I'm trying to find a way to cache the results of a query that won't change frequently, for example the product categories of an e-commerce site (cellphones, TVs, etc.). I'm thinking of using template fragment caching, but in that fragment I iterate over a list of these categories. This list is available on every part of the site, since it lives in my base.html file. Do I always have to pass the list of categories when rendering the templates? Or is there a more dynamic way to do this, making the list always available in the template?
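
    One common way to get this without passing the list from every view is a context processor backed by Django's low-level cache API. The sketch below is only an illustration; the app, model, and cache-key names (myshop, Category, all_categories) are assumptions, not anything from the question.

        # myshop/context_processors.py (hypothetical module) -- register it in the
        # context-processor setting so every RequestContext receives 'categories'.
        from django.core.cache import cache
        from myshop.models import Category  # assumed app and model

        def categories(request):
            cats = cache.get('all_categories')
            if cats is None:
                cats = list(Category.objects.all())          # hit the database only on a cache miss
                cache.set('all_categories', cats, 60 * 60)   # keep the list for one hour
            return {'categories': cats}                      # usable as {{ categories }} in base.html

    Template fragment caching can still wrap the rendered markup; the context processor simply guarantees the data is there for base.html on every request.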

    Read the article

  • Create SAMBA node trust relationship to Windows 2003 PDC server

    - by Rod Regier
    I am having problems creating a trust relationship between an OpenVMS/IA64 node running V/IA64 8.3-1H1, TCPIP 5.6 ECO 5, CIFS 1.1 ECO1 PS11 (SAMBA 3.0.28a) and a Windows 2003 server running as a PDC. I do have two other OpenVMS/Alpha nodes running V/A 8.3, TCPIP 5.6 ECO 4, CIFS 1.1 ECO1 PS10 (SAMBA 3.0.28a) with working trust relationships to the same Windows 2003 server. Looking for assistance in resolving the trust "handshake".

    Details are from the failing node. Unless otherwise noted, corresponding files on the working nodes are similar or identical.

    SMB.CONF extract:

        [global]
        server string = Samba %v running on %h (OpenVMS)
        workgroup = WILMA
        netbios name = %h
        security = DOMAIN
        encrypt passwords = Yes
        name resolve order = lmhosts host wins bcast
        Password server = *
        log file = /samba$log/log.%m
        printcap name = /sys$manager/ucx$printcap.dat
        guest account = DYMAX
        print command = print %f/queue=%p/delete/passall/name="""""%s"""""
        lprm command = delete/entry=%j
        map archive = No
        printing = OpenVMS

    Output of "net rpc testjoin":

        [2010/08/13 16:09:28, 0] SAMBA$SRC:[SOURCE.RPC_CLIENT]CLI_PIPE.C;1:(2443)
          get_schannel_session_key: could not fetch trust account password for domain 'WILMA'
        [2010/08/13 16:09:28, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC_JOIN.C;1:(72)
          net_rpc_join_ok: failed to get schannel session key from server W2K3AD2 for domain WILMA.
          Error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO
        Join to domain 'WILMA' is not valid

    Output of "net rpc join -Uaccount%password":

        tdb_open_isam: error verifying status of file SAMBA$ROOT:[PRIVATE]secrets.tdb
        tdb_open_isam: errno value = 1
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.PASSDB]SECRETS.C;1:(72)
          Failed to open /SAMBA$ROOT/PRIVATE/secrets.tdb
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC.C;1:(322)
          error storing domain sid for WILMA
        tdb_open_isam: error verifying status of file SAMBA$ROOT:[PRIVATE]secrets.tdb
        tdb_open_isam: errno value = 1
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.PASSDB]SECRETS.C;1:(72)
          Failed to open /SAMBA$ROOT/PRIVATE/secrets.tdb
        [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC_JOIN.C;1:(409)
          error storing domain sid for WILMA
        Unable to join domain WILMA.

    Example from one of the working nodes:

        net rpc testjoin
        Join to 'WILMA' is OK

    Read the article

  • Samba authentication problem when attempting to connect from Windows client

    - by Camsoft
    I've got a Linux server running Ubuntu and Samba. I've created two shares in Samba that point to directories owned by the user "cameron". When I connect to these shares from Windows 7 it connects and lets me see the files, but they are read-only. This is the desired behaviour for guest users, but not for authenticated users. My user on the Windows client is "Cameron" and has the same password as the Linux user "cameron". I don't think my Windows user has authenticated against the Linux user. I even created a users.map file to map the user cameron (Linux) to Cameron (Windows), but still it does not work. Here is my samba config file (UPDATED):

        [global]
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        username map = /etc/samba/users.map
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        os level = 65
        preferred master = Yes
        dns proxy = No
        wins support = Yes
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        valid users = cameron
        write list = cameron

        [www]
        path = /usr/local/apache2/htdocs
        write list = @www-data
        force group = www-data
        guest ok = Yes

        [cameron]
        path = /home/cameron
        write list = @www-data
        force group = www-data
        guest ok = Yes

    Read the article

  • Unable to get the Current User's Token information

    - by Ram
    Hi, I have been trying to get the currently logged-in user's token information using the following code:

        [DllImport("wtsapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern bool WTSQueryUserToken(int sessionId, out IntPtr tokenHandle);

        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern uint WTSGetActiveConsoleSessionId();

        static void Main(string[] args)
        {
            try
            {
                int sessionID = (int)WTSGetActiveConsoleSessionId();
                if (sessionID != -1)
                {
                    System.IntPtr currentToken = IntPtr.Zero;
                    bool bRet = WTSQueryUserToken(sessionID, out currentToken);
                    Console.WriteLine("bRet : " + bRet.ToString());
                }
            }
            catch (Exception) { }
        }

    The problem is that bRet is always false and currentToken is always 0. I am getting the sessionid as 1. Could someone tell me what's going wrong here? I want to use this token information to pass it to the CreateProcessAsUser function from a windows service. Thanks, Ram

    Read the article

  • How can I listen to multiple Serial Ports asynchronously in C#

    - by Kamiel Wanrooij
    I have an application that listens to a piece of hardware on a USB-to-serial converter. My application should monitor more than one serial port at the same time. I loop over the serial ports I need to listen to and create a thread for each port. In the thread I have my data handling routine. When I assign one port, it runs flawlessly. When I listen to the other one, it also works. When I open both ports, however, the second port always throws an UnauthorizedAccessException when calling serialPort.Open(). It does not matter in what order I open the ports, the second one always fails. I listen to the ports using serialPort.ReadLine() in a while loop. Can .NET open more than one port at the same time? Can I listen to both? Or should I use another (thread-safe?) way to access my serial port events?
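
    For illustration only, here is the same one-thread-per-port pattern sketched in Python with pyserial; the port names and baud rate are assumptions, and this does not use .NET's SerialPort at all. The point of the sketch is simply that each port gets its own handle and its own blocking read loop.

        import threading
        import serial  # pyserial

        def listen(port_name):
            # Each thread owns exactly one port handle; nothing is shared between threads.
            with serial.Serial(port_name, baudrate=9600, timeout=1) as port:
                while True:
                    line = port.readline()
                    if line:
                        print(port_name, line)  # stand-in for the real data-handling routine

        for name in ("COM3", "COM4"):  # assumed converter ports
            threading.Thread(target=listen, args=(name,), daemon=True).start()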

    Read the article

  • How much abstraction is too much?

    - by Daniel Bingham
    In an object-oriented program: How much abstraction is too much? How much is just right?

    I have always been a nuts-and-bolts kind of guy. I understood the concept behind high levels of encapsulation and abstraction, but always felt instinctively that adding too much would just confuse the program. I always tried to shoot for an amount of abstraction that left no empty classes or layers. And where in doubt, instead of adding a new layer to the hierarchy, I would try to fit something into the existing layers.

    However, recently I've been encountering more highly abstracted systems. Systems where everything that could require a representation later in the hierarchy gets one up front. This leads to a lot of empty layers, which at first seems like bad design. However, on second thought I've come to realize that leaving those empty layers gives you more places to hook into in the future without much refactoring. It leaves you greater ability to add new functionality on top of the old without doing nearly as much work to adjust the old.

    The two risks of this seem to be that you could get the layers you need wrong. In that case one would wind up still needing to do substantial refactoring to extend the code and would still have a ton of never-used layers. But depending on how much time you spend coming up with the initial abstractions, the chance of screwing it up, and the time that could be saved later if you get it right - it may still be worth it to try. The other risk I can think of is the risk of overdoing it and never needing all the extra layers. But is that really so bad? Are extra class layers really so expensive that it is much of a loss if they are never used? The biggest expense and loss here would be the time that is lost up front coming up with the layers. But much of that time still might be saved later when one can work with the abstracted code rather than more low-level code.

    So when is it too much? At what point do the empty layers and extra "might need" abstractions become overkill? How little is too little? Where's the sweet spot? Are there any dependable rules of thumb you've found in the course of your career that help you judge the amount of abstraction needed?

    Read the article

  • pfsense 2.0.1 Firewall SMB Share not showing up under network

    - by atrueresistance
    I have a FreeNAS NAS with an SMB share running at 192.168.2.2 on a 192.168.2.0/28 network. The gateway is 192.168.2.1. Originally this was running on a switch with my LAN, but now, having upgraded to new hardware, the FreeNAS has its own port on the firewall. Before the switch, the FreeNAS would show up under Network on a Windows 7 box as "freenas" (WINS) and on an OS X Lion box as "CIFS shares on freenas", so I know it doesn't have anything to do with the FreeNAS itself. Here are my pfSense rules:

        ID     Proto  Source       Port  Destination  Port                Gateway  Queue  Schedule  Description
        PASS   TCP    FREENAS net  *     LAN net      139 (NetBIOS-SSN)   *        none             cifs lan passthrough
        PASS   TCP    FREENAS net  *     LAN net      389 (LDAP)          *        none             cifs lan passthrough
        PASS   TCP    FREENAS net  *     LAN net      445 (MS DS)         *        none             cifs lan passthrough
        PASS   UDP    FREENAS net  *     LAN net      137 (NetBIOS-NS)    *        none             cifs lan passthrough
        PASS   UDP    FREENAS net  *     LAN net      138 (NetBIOS-DGM)   *        none             cifs lan passthrough
        BLOCK  *      FREENAS net  *     LAN net      *                   *        none
        BLOCK  *      FREENAS net  *     OPTZONE net  *                   *        none
        BLOCK  *      FREENAS net  *     192.168.2.1  *                   *        none
        PASS   *      FREENAS net  *     *            *                   *        none
        BLOCK  *      *            *     *            *                   *        none

    I can connect if I use \\192.168.2.2 and enter the correct login details. I would just like this to show up on the network. Nothing in the log seems to be blocked when I filter by 192.168.2.2. What port am I missing for SMB to show up under the network so I don't have to connect by IP?

    P.S. Do I really need the LDAP rule?

    Read the article

  • [iPhone] Objective-C : UIScrollView manual paging

    - by Daniel
    Hi folks! I want to use the scroll view as something like a picker in horizontal mode. The scroll view holds up to seven subviews, and each subview represents a value. Three views are always visible, and the one in the middle is the selected one.

    Scroll view visible at start:        __ | V1 | V2
    Scroll view set to view/value two:   V1 | V2 | V3
    Scroll view set to the last value:   V2 | V3 | __

    The real problem I have is the "pagingEnabled" flag. If pagingEnabled is set to YES, the scroll view always pages three subviews/values instead of only one. If pagingEnabled is set to NO, the scroll view does not snap into place. Is there a nice solution for my problem? Thanks a lot, Dan ;)

    Read the article

  • How does ListView.ItemCollection.Contains() work?

    - by Inno
    Hi everyone, I'm copying ListViewItems from one ListView to another, something like:

        foreach (ListViewItem item in usersListView.SelectedItems)
        {
            selectedUsersListView.Items.Add((ListViewItem)item.Clone());
        }

    If I want to use ListView.ItemCollection.Contains() to determine whether an item was already copied, I always get false:

        foreach (ListViewItem item in usersListView.SelectedItems)
        {
            if (!selectedUsersListView.Items.Contains(item))  // Contains() always returns false here
            {
                selectedUsersListView.Items.Add((ListViewItem)item.Clone());
            }
        }

    I did the following to solve my problem:

        foreach (ListViewItem item in usersListView.SelectedItems)
        {
            ListViewItem newItem = (ListViewItem)item.Clone();
            newItem.Name = newItem.Text;
            if (!selectedUsersListView.Items.ContainsKey(newItem.Name))  // this works
            {
                selectedUsersListView.Items.Add(newItem);
            }
        }

    So, it's OK that this solves my problem, but I still have no idea why ListView.ItemCollection.Contains() doesn't work... How does ListView.ItemCollection.Contains() decide whether an item already exists? How do ListViewItems have to be initialised so that ListView.ItemCollection.Contains() (not ListView.ItemCollection.ContainsKey()) is able to identify them?

    Read the article

  • How to get rid of Drupal CSS stylesheets?

    - by unforgiven
    I am trying to accomplish the following. I need to use Drupal 6 as a project requirement, but I want to use my own HTML and CSS stylesheets for each node/view/panel etc. The problem is that, whatever the theme, Drupal always applies both my own CSS stylesheets and the CSS belonging to the chosen theme to my HTML content. I have also tried, without success, using the stylestripper module (installed in sites/all/modules). No matter what I do, additional CSS stylesheets are applied to my pages, completely destroying my layout. What is the proper way to achieve this? Why does stylestripper not work at all? Is there a completely blank theme available? I have tried basic, mothership, zen etc., but I always see additional CSS stylesheets applied to my pages. This is driving me crazy; Drupal was chosen by someone else for its flexibility. Thank you in advance.

    Read the article

  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between those (at least in the beginning). Further, I must prioritize traffic according to at least three kinds of traffic (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, but of course no other traffic class may starve either.

    The question is, what makes more sense and also guarantees better realtime throughput:

    1. Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), and each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).

    2. Having one class per priority level on top, each having a different rate (again, priority won't matter), and each having 3 sub-classes, one per segment, whereas all 3 in the realtime class have highest prio, lowest prio in the bulk class, and so on.

    I'll try to make this more clear with the following ASCII art:

    Case 1:

        root --+--> Segment A
               |       +--> High Prio
               |       +--> Normal Prio
               |       +--> Low Prio
               |
               +--> Segment B
               |       +--> High Prio
               |       +--> Normal Prio
               |       +--> Low Prio
               |
               +--> Segment C
                       +--> High Prio
                       +--> Normal Prio
                       +--> Low Prio

    Case 2:

        root --+--> High Prio
               |       +--> Segment A
               |       +--> Segment B
               |       +--> Segment C
               |
               +--> Normal Prio
               |       +--> Segment A
               |       +--> Segment B
               |       +--> Segment C
               |
               +--> Low Prio
                       +--> Segment A
                       +--> Segment B
                       +--> Segment C

    Case 1 seems like the way most people would do it, but unless I'm misreading the HTB implementation details, Case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent and, when borrowing, classes with higher priority always get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree level are always preferred to those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic. Segment A is only sending realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is only sending bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send. Since A-High Prio has the higher priority, it will always be scheduled first, till it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, that class is now served first, till it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again and now Segment A-High Prio is going to win again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of bandwidth spare, as Segment C is not using any of its guaranteed traffic), but again, it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again High Prio-Segment A is going to win as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B to also hit its rate. Once both have hit their rate and both have to borrow, High Prio-Segment A will win again until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which again has plenty of bandwidth left (all bandwidth of Normal Prio is unused at the moment), but it has to wait again until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in Case 2 it seems like the realtime traffic has to wait less than in Case 1, since it only has to wait till the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in Case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups, using a priority qdisc. But priority queues have the big problem that they cause starvation if they are not somehow limited. Starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but when doing so, a single priority class cannot saturate the link on its own any longer, even if all other priority classes are empty; the TBF will prevent that from happening. And this is also sub-optimal, since why wouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer it would be such an easy task if I could simply write my own scheduler (which I'm not allowed to do).
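
    For concreteness, here is a minimal sketch of how the Case 1 hierarchy might be laid out with HTB via tc, driven from Python. The interface name, total line rate, and per-class rates are all assumptions for illustration; real values would come from the actual link, and the filters that classify traffic into the leaves are omitted.

        import subprocess

        DEV, LINE = "eth0", "9mbit"  # assumed interface and total line rate

        def tc(*args):
            subprocess.run(["tc", *args], check=True)

        # Root HTB qdisc; unclassified traffic falls into an arbitrary default leaf (1:103).
        tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "103")
        tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1", "htb", "rate", LINE, "ceil", LINE)

        # Case 1: one inner class per segment, three priority leaves under each.
        for seg_id in ("1:10", "1:20", "1:30"):                        # Segments A, B, C
            tc("class", "add", "dev", DEV, "parent", "1:1", "classid", seg_id,
               "htb", "rate", "3mbit", "ceil", LINE)
            for prio, suffix in (("0", "1"), ("1", "2"), ("2", "3")):  # high, normal, low
                tc("class", "add", "dev", DEV, "parent", seg_id, "classid", seg_id + suffix,
                   "htb", "rate", "1mbit", "ceil", LINE, "prio", prio)

    The Case 2 layout would simply swap the two loops: three priority classes under the root, each with one leaf per segment.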

    Read the article

  • The relationship between OPC and DCOM

    - by typoknig
    Hi all, I am trying to grasp the link between OPC and DCOM. I have watched all four of the tutorials here and I think I have a good feeling for what OPC is, but in one of the tutorials (the third one, 35 seconds in) the narrator states that OPC is based on DCOM, but I do not understand how the two are really linked. My confusion comes from a question my professor posed in which he asked "How and where would you deploy OPC instead of DCOM and vice-versa." His question makes it seem like the two are not as linked as my research suggests. I'm not looking for anyone to answer the question, I just want to know the relation between OPC and DCOM, then I can figure the rest out. Specifically I would like to know if:

    1.) One is always based on the other
    2.) One can always be deployed without the other.

    Read the article

  • Project References DLL version hell

    - by Mr Shoubs
    We're having problems getting Visual Studio to pick up the latest version of a DLL from one of our projects. We have multiple class library projects (e.g. BusinessLogic, ReportData) and a number of web services; each has a reference to a Connectivity DLL we've written (this reference to the Connectivity DLL is the problem). We always point references to the DLL in the bin/debug folder (which is where we always build to for any given project), and all custom DLL references have CopyLocal = True and SpecificVersion = False. ReportData has a reference to BusinessLogic (which also has a reference to Connectivity - I don't see why this should cause a problem, but thought it worth mentioning).

    The weird thing is, when you click "Add Reference", browse to Connectivity/bin/debug and hover the mouse over the DLL file, the correct (latest) version is shown (version and file version are always incremented together), but when you click OK, a previous version number is pulled through. Even when I look in the current project's debug folder (where CopyLocal would put the DLL after compiling), it shows the latest version number. Nowhere can I find the previous version of the DLL outside of Visual Studio, but in that project's references it has the old version - even though the path is correct. I'm at a loss as to where it might be getting the old versions from, or even why it wants that one. This is possibly the most frustrating problem I have ever come across. Does anyone know how to ensure the latest version is pulled through (preferably automatically or on compile)?

    EDIT: Although not exactly the scenario I'm dealing with, I was reading this article and somewhere it mentions that the CLR ignores revision numbers. Understandable (even though this hasn't been a problem before - we're on revision 39), so I thought I would update the build number; that still didn't work. In a vain attempt I thought I would update the minor version number and see if that made any difference. I'm not saying this is the answer, as I have to check quite a few things first, but on the face of it this seems to have solved my problem...

    Further edit: In other class libraries this seems to have solved the problem, however in a test Windows application it still pulls a previous version through :( If I increment the minor version number again, the same problem comes back and I am left with the wrong version being pulled through.

    Further edit: I created an entirely new project, added a reference and still had the exact same problem. This suggests the problem is restricted to the project I am referencing. Wish I knew why! Anyone had this problem before and know how to get around it? HELP!

    Read the article

  • MySQLi PHP: Check if SQL INSERT query was fully successful using MySQLi

    - by Jonathan
    Hi, I have this big function that gets a lot of different data and inserts it into multiple tables. Not all the data is always available, so not all the SQL INSERT queries are successful. I need to check which SQL INSERT query was fully successful and which wasn't, so I can then do something with this data (like inserting it into a log table or similar). Just to give you an example of how I think it can be done:

        $sql = 'INSERT INTO data_table (ID, column1, column2) VALUES(?, ?, ?)';
        if ($stmt->prepare($sql)) {
            $stmt->bind_param('iss', $varID, $var1, $var2);
            if ($stmt->execute()) {
                $success = TRUE; // or something like that
            }
        }

    I'm not completely sure this is the best way, or whether it always really shows that the data was inserted into the table... Any suggestions?

    Read the article

  • PECL install fails

    - by James
    Hey! I have browsed every Google result and read all the forum posts about this error, but I cannot solve it. When using PECL install for anything, I always end up getting this error:

        checking whether the C compiler works... configure: error: cannot run C compiled programs.

    Everything else succeeds up to that point, then bam! I'm using CentOS 4.3, PEAR is the latest stable version, and GCC is a stable and recent version. Everything is working as it should, but the C compiler always seems to error. I've tried to give /tmp the right privileges for the operation by temporarily remounting it with:

        mount -o remount,exec,suid /tmp

    But that doesn't work. I've literally tried everything that has been suggested, to no avail. Any ideas?

    Read the article

  • UART speed possibly wrong

    - by Mike
    My brain is fried, so I thought I would pass this one to the community. When sending 1 character to my embedded system, it consistently thinks it receives 2 characters. The first received character seems to map to the transmitted character (in some unknown way) and the second received character is always 0xff. Here is what I observed (the second received byte, always ff, is left out):

        Tx char (hex)   Rx char (hex)
        31              9D
        32              9B
        33              99
        61              3D
        62              3B
        63              39
        64              37
        65              35
        41              7D
        42              7B
        43              79

    I have checked my clocks and they seem to be OK. The only difference between this non-working version and the previous version is that I am now using an RS485 chip. I have traced the signal all the way up to the MCU and it looks fine (confirmed the bit values on the rx pin).

    Read the article

  • jquery and requirejs and knockout; reference requirejs object from within itself

    - by Thomas
    We use jquery and requirejs to create a 'viewmodel' like this:

        define('vm.inkoopfactuurAanleveren',
            ['jquery', 'underscore', 'ko', 'datacontext', 'router', 'messenger', 'config', 'store'],
            function ($, _, ko, datacontext, router, messenger, config, store) {

                var isBusy = false,
                    isRefreshing = false,
                    inkoopFactuur = { factuurNummer: ko.observable("AAA") },

                    activate = function (routeData, callback) {
                        messenger.publish.viewModelActivated({ canleaveCallback: canLeave });
                        getNewInkoopFactuurAanleveren(callback);
                        var restricteduploader = new qq.FineUploader({
                            element: $('#restricted-fine-uploader')[0],
                            request: { endpoint: 'api/InkoopFactuurAanleveren', forceMultipart: true },
                            multiple: false,
                            failedUploadTextDisplay: { mode: 'custom', maxChars: 250, responseProperty: 'error', enableTooltip: true },
                            text: { uploadButton: 'Click or Drop' },
                            showMessage: function (message) {
                                $('#restricted-fine-uploader').append('<div class="alert alert-error">' + message + '</div>');
                            },
                            debug: true,
                            callbacks: {
                                onComplete: function (id, fileName, responseJSON) { var response = responseJSON; }
                            }
                        });
                    },

                    invokeFunctionIfExists = function (callback) {
                        if (_.isFunction(callback)) { callback(); }
                    },

                    loaded = function (factuur) {
                        inkoopFactuur = factuur;
                        var ids = config.viewIds;
                        ko.applyBindings(this, getView(ids.inkoopfactuurAanleveren)); /* <----- THIS = OUT OF SCOPE! */
                    },

                    bind = function () { },

                    saved = function (success) { var s = success; },

                    saveCmd = ko.asyncCommand({
                        execute: function (complete) {
                            $.when(datacontext.saveNewInkoopFactuurAanleveren(inkoopFactuur))
                                .then(saved).always(complete);
                            return;
                        },
                        canExecute: function (isExecuting) { return true; }
                    }),

                    getView = function (viewName) {
                        return $(viewName).get(0);
                    },

                    getNewInkoopFactuurAanleveren = function (callback) {
                        if (!isRefreshing) {
                            isRefreshing = true;
                            $.when(datacontext.getNewInkoopFactuurAanleveren(dataOptions(true)))
                                .then(loaded).always(invokeFunctionIfExists(callback));
                            isRefreshing = false;
                        }
                    },

                    dataOptions = function (force) {
                        return {
                            results: inkoopFactuur,
                            // filter: sessionFilter,
                            // sortFunction: sort.sessionSort,
                            forceRefresh: force
                        };
                    },

                    canLeave = function () { return true; },

                    forceRefreshCmd = ko.asyncCommand({
                        execute: function (complete) {
                            //$.when(datacontext.sessions.getSessionsAndAttendance(dataOptions(true)))
                            //    .always(complete);
                            complete;
                        }
                    }),

                    init = function () {
                        // activate();
                        // Bind jQuery delegated events
                        //eventDelegates.sessionsListItem(gotoDetails);
                        //eventDelegates.sessionsFavorite(saveFavorite);
                        // Subscribe to specific changes of observables
                        //addFilterSubscriptions();
                    };

                init();

                return {
                    activate: activate,
                    canLeave: canLeave,
                    inkoopFactuur: inkoopFactuur,
                    saveCmd: saveCmd,
                    forceRefreshCmd: forceRefreshCmd,
                    bind: bind,
                    invokeFunctionIfExists: invokeFunctionIfExists
                };
            });

    On the line ko.applyBindings(this, getView(ids.inkoopfactuurAanleveren)); in the 'loaded' method, the 'this' keyword doesn't refer to the 'viewmodel' object. The 'self' keyword seems to refer to a combination of methods found over multiple 'viewmodels'. The saveCmd property is bound through knockout, but gives an error since it cannot be found. How can the ko.applyBindings get the right reference to the viewmodel? In other words, with what do we need to replace the 'this' keyword in the applyBindings? I would imagine you can 'ask' requirejs to give us the earlier instantiated object with identifier 'vm.inkoopfactuurAanleveren', but I cannot figure out how.

    Read the article

  • Command Line PHP with shell_exec works for root but not others

    - by Kristopher Ives
    I have a very simple script to test whether running shell_exec (or the backtick operator) basically works:

        #!/usr/bin/php5-cli
        <?php
        echo "This is a PHP script\n";
        echo `ls -l /home/stoysnet/`;

    Unless I run this as root, it always gives me:

        $ ./foo.php
        This is a PHP script
        Warning: _shell_exec(): Permission Denied in /home/stoysnet/foo.php on line 5

    I've tried running this via PHP in a few different ways, but I always get the same error. However, when I put the script into a subdirectory of /etc/ owned by root:root and executed it as root, it worked. What gives?

    Read the article

  • Enabling Samba Shares Across Subnets

    - by John
    I was curious how I could go about setting up Samba so that shares could be seen and used across different subnets. We have some Linux devices that are bound to Active Directory, and we would like to have them serve Samba shares to clients that reside in a different subnet than the one the servers reside in. Is there any way to do this without needing to set up a WINS server or use legacy NetBIOS methods, since the majority of our clients are Windows 7, Windows Server 2003, Windows Server 2008, and Macintosh OS X (10.6 or newer)?

    EDIT: Right now, only clients in the same subnet as the Samba server can see the shares. Clients outside of that subnet (i.e. the client subnet) cannot see or connect to the share. The error returned is: "The specified network name is no longer available." It does not seem to matter whether I use the IP, FQDN, or NetBIOS name to try to connect to the share. We have a common Cisco router handling the inter-subnet routing. Everything else seems to work correctly with this network setup and the device can be pinged from multiple subnets. I also do not believe it to be a firewall type of issue, since the rules for this segment are rather lax.

    Read the article

  • Let system time determine animation speed, not program FPS

    - by Anders
    I'm writing a card game in ActionScript 3. Each card is represented by an instance of a class extending MovieClip, exported from Flash CS4, that contains the card graphics and a flip animation. When I want to flip a card I call gotoAndPlay on this movie clip. When the frame rate slows down, all animations take longer to finish. It seems Flash will by default animate movie clips in a way that makes sure all frames in the clip are drawn. Therefore, when the program frame rate goes below the frame rate of the clip, the animation is played at a slower pace. I would like an animation to always play at the same speed and, as a consequence, always be shown on the screen for the same amount of time. If the frame rate is too slow to show all frames, frames should be dropped. Is it possible to tell Flash to animate in this way? If not, what is the easiest way to program this behavior myself?
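
    The usual approach is to drive the animation from elapsed wall-clock time rather than from frame ticks. Below is a minimal sketch of that idea in Python; the frame count, duration, and the sleep that stands in for the host's render tick are all assumptions, and in ActionScript the analogue would be choosing the clip frame from getTimer() on each ENTER_FRAME.

        import time

        DURATION = 0.4       # seconds a card flip should always take, regardless of render FPS
        TOTAL_FRAMES = 12    # assumed number of frames in the flip clip

        def current_frame(start):
            # Progress depends only on wall-clock time, so slow rendering skips
            # frames instead of stretching the whole animation.
            progress = min(1.0, (time.monotonic() - start) / DURATION)
            return int(progress * (TOTAL_FRAMES - 1))

        start = time.monotonic()
        while True:
            frame = current_frame(start)
            print("render frame", frame)
            if frame == TOTAL_FRAMES - 1:
                break
            time.sleep(0.05)  # stand-in for a variable-rate frame tick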

    Read the article

  • Named pipe is not flushing in Python

    - by BrainCore
    I have a named pipe created via the os.mkfifo() command. I have two different Python processes accessing this named pipe, process A is reading, and process B is writing. Process A uses the select function to determine when there is data available in the fifo/pipe. Despite the fact that process B flushes after each write call, process A's select function does not always return (it keeps blocking as if there is no new data). After looking into this issue extensively, I finally just programmed process B to add 5KB of garbage writes before and after my real call, and likewise process A is programmed to ignore those 5KB. Now everything works fine, and select is always returning appropriately. I came to this hack-ish solution by noticing that process A's select would return if process B were to be killed (after it was writing and flushing, it would sleep on a read pipe). Is there a problem with flush in Python for named pipes?
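
    One way to take Python's user-space buffering out of the picture on both ends is to use the low-level os.read/os.write calls on the FIFO directly. The sketch below is only an illustration; the path, chunk size, and timeout are made up for the example, and whether it matches the behaviour in the question depends on how the two processes currently open the FIFO.

        import os
        import select

        FIFO = "/tmp/demo_fifo"  # assumed path
        if not os.path.exists(FIFO):
            os.mkfifo(FIFO)

        def reader():
            # Process A: wait with select, then read whatever bytes are available.
            fd = os.open(FIFO, os.O_RDONLY | os.O_NONBLOCK)
            while True:
                ready, _, _ = select.select([fd], [], [], 5.0)
                if ready:
                    data = os.read(fd, 4096)
                    if data:
                        print("got", data)

        def writer(payload):
            # Process B: os.write hands the bytes straight to the kernel; there is
            # no stdio/file-object buffer left to flush.
            fd = os.open(FIFO, os.O_WRONLY)
            os.write(fd, payload)
            os.close(fd)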

    Read the article

  • How to get an array from a database in Rails3 even when there's only one record?

    - by yuval
    In Rails 3, I have the following line:

        @messages = Message.where("recipient_deleted = ?", false).find_by_recipient_id(@user.id)

    In my view, I loop through @messages and print out each message, as such:

        <% for message in @messages %>
          <%= message.sender_id %>
          <%= message.created_at %>
          <%= message.body %>
        <% end %>

    This works flawlessly when there are several messages. The problem is that when there is only one message, I get an error thrown at me: undefined method `each'. How do I force Rails to always return an array of messages, even if there's only one message, so that each always works? Thanks!

    Read the article

  • Nginx's speed, and how to replicate it [migrated]

    - by Mediocre Gopher
    I'm interested in this from more of an academic standpoint than a practical one; I don't plan on creating a production web server to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google result for this is this thread, but it merely links to a cryptic slideshow and a general overview of different I/O strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple Erlang server to try to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given Erlang's lightweight processes and underlying aio structure I thought it would compete, yet nginx still wins by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought was keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great a difference. Am I wrong? Or is there something else that I'm missing?
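
    The usual explanation is the concurrency model: a small, fixed number of worker processes each running an event loop over many non-blocking sockets, instead of a process or thread per request. Here is a toy illustration of that model in Python's asyncio; the document root, port, and minimal HTTP handling are assumptions, and a real server would use sendfile, cached file descriptors, and far more careful parsing.

        import asyncio

        ROOT = "/var/www"  # assumed document root

        async def handle(reader, writer):
            request = await reader.readline()          # e.g. b"GET /index.html HTTP/1.0\r\n"
            parts = request.split()
            path = ROOT + (parts[1].decode() if len(parts) > 1 else "/")
            try:
                with open(path, "rb") as f:            # blocking read, acceptable for a sketch
                    body = f.read()
                writer.write(b"HTTP/1.0 200 OK\r\n\r\n" + body)
            except OSError:
                writer.write(b"HTTP/1.0 404 Not Found\r\n\r\n")
            await writer.drain()
            writer.close()

        async def main():
            server = await asyncio.start_server(handle, "0.0.0.0", 8080)
            async with server:
                await server.serve_forever()

        asyncio.run(main())

    One event loop juggling thousands of mostly idle connections costs almost nothing per connection, which is a large part of the gap a spawn-per-request design has to make up.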

    Read the article

  • Underwriting in a New Frontier: Spurring Innovation

    - by [email protected]
    Susan Keuer, product strategy manager for Oracle Insurance, shares her experiences and insight from the 2010 Association of Home Office Underwriters (AHOU) Annual Conference, April 11-14, in San Antonio, Texas.

    How can I be more innovative in underwriting? It's a common question I hear from insurance carriers, producers and others, so it was no surprise that it was the key theme at the recent 2010 AHOU Annual Conference. This year's event drew more than 900 insurance professionals involved in the underwriting process across life and annuities, property and casualty, and reinsurance from around the globe, including the U.S., Canada, Australia, the Bahamas, and more, to San Antonio - a Texas city where innovation transformed a series of downtown drainage canals into its premier River Walk tourist destination.

    CNN's medical correspondent Dr. Sanjay Gupta kicked off the conference with a phenomenal opening session that drove home the theme of the conference, "Underwriting in a New Frontier: Spurring Innovation." Drawing from his own experience as a neurosurgeon treating critically injured patients in the field in Iraq, Gupta inspired audience members to think outside the box during the underwriting process. He shared a compelling story of operating on a soldier who had suffered head trauma in a field hospital. With minimal supplies available, Gupta used a Black and Decker saw to operate on the soldier's head and reduce the pressure on his swelling brain. Drawing from this example, Gupta encouraged underwriters to think creatively, be innovative, and consider new tools and sources of information, such as social networking sites, during the underwriting process. So as you are looking at risk, take into consideration all the resources you have available.

    Gupta also stressed the concept of IKIGAI - noting that individuals who believe that their life is worth living are less likely to die than their counterparts without this belief. How does one quantify this approach to life when evaluating risk? Could this be something to consider as a "category" in the near future? How can this same belief in your own work spur innovation?

    The role of technology was a hot topic of discussion throughout the conference. Sessions delved into everything from the latest underwriting software to the rise of social media and how it is increasingly being integrated into the underwriting process and solutions. In one session a trio of panelists representing the carrier, producer and vendor communities stressed the importance to underwriters of leveraging new technology and the plethora of online information sources, all of which can be used to accurately, honestly and consistently evaluate risk throughout the underwriting process.

    Another session focused on the explosion of social media, noting:

    1. Social media is growing exponentially - About eight percent of Americans used social media five years ago. Today about 46 percent of Americans do so, with 85 percent of financial services professionals using social media in their work.

    2. It will impact your business - Underwriters reconfirmed over and over that they are increasingly using "free" tools available in cyberspace in lieu of more costly solutions, such as inspection reports conducted by individuals in the field.

    3. Information is instantly available on the Web, anytime, anywhere - LinkedIn was mentioned as a way to connect to peers in the underwriting community and producers alike. Many carriers and agents also are using Facebook to promote their company to customers - and as a point of entry to allow them to perform some functionality, such as accessing product marketing information, rather than directing users to go to the carrier's own proprietary website. Other carriers have relaxed their tight brand marketing to allow their producers to drive more business to their personal Facebook sites, where they offer innovative tools such as Application Capture or ask for medical information in a more relaxed fashion.

    Other key topics at the conference included the economy, ongoing industry consolidation, real-estate valuations as an asset and input into the underwriting process, and producer trends. All stressed a "back to basics" approach for low-cost term products.

    Finally, Connie Merritt, RN, PHN, entertained the large group of attendees with audience-engaging insight on how to "Tame the Lions in Your Life - Dealing with Complainers, Bullies, Grump and Curmudgeon." Merritt noted "we are too busy for our own good." She shared how her overachieving personality had impacted her life. Audience members then were asked to pick red, yellow, blue, or green shapes, without knowing that each one represented a specific personality trait. For example, those who picked blue were the peacemakers. Those who chose yellow were social - the hint was to "Be Quiet Longer." She then offered these "lion taming" steps:

    1. Admit It
    2. Accept It
    3. Let Go
    4. Be Present (which paralleled Gupta's IKIGAI concept)

    When thinking about underwriting, I encourage you to be present in the moment and think creatively, but don't be afraid to look ahead to the future and be an innovator. I hope to see you at next year's AHOU Annual Conference, May 1-4, 2011 at The Mirage in Las Vegas, Nev.

    Susan Keuer is the product strategy manager for new business underwriting. She brings more than 20 years of insurance industry experience working with leading insurance carriers and technology companies to her role on the product strategy team for life/annuities solutions within the Oracle Insurance Global Business Unit.

    Read the article
