Search Results

Search found 8692 results on 348 pages for 'per magnusson'.


  • Windows 8.1 will not go back to sleep after waking up

    - by per
    I have problems putting Windows to sleep and starting the screen saver on my new Windows 8.1 machine. Sleep mode and screen savers work only when the computer is first powered up (or restarted). But once it goes to sleep (manually or automatically) and I wake it up later, it won't go back to sleep again and I can't use screen savers either. I updated the chipset and graphics card drivers. My computer isn't part of a homegroup either. Does anyone else have similar issues? Thanks for your advice, per


  • Oracle announces the new release of Oracle Hyperion EPM System

    - by Stefano Oddone
    On April 4th, during the Oracle OpenWorld held in Tokyo, Mark Hurd, President of Oracle, announced the imminent release 11.1.2.2 of Oracle Hyperion Enterprise Performance Management System, the leading platform in the worldwide EPM market. The new release introduces an extremely significant set of new modules, improvements to existing modules, and technological and functional evolutions that further increase the value and the competitive advantage delivered by the Oracle offering. The main highlights include: the introduction of the new Oracle Hyperion Project Financial Planning module, a vertical solution for the financial planning, funding and budgeting of projects, initiatives, activities and work orders; the enrichment of Oracle Hyperion Planning with built-in capabilities for Predictive Planning and Rolling Forecast, to support ever more flexible, frequent and effective budgeting and forecasting processes; the introduction of the new Oracle Account Reconciliation Manager module for managing the entire life cycle of account reconciliation between General Ledger and Sub-Ledger or between different accounting systems; the enrichment of Oracle Hyperion Financial Management with a completely new web interface and the introduction of Smart Dimensionality, i.e. the ability to define models with more than the 12 "canonical" dimensions typical of previous releases, with query and calculation handling optimized for the cardinality of the dimensions involved; the enrichment of Oracle Hyperion Profitability & Cost Management with Detailed Profitability capabilities, i.e. the ability to implement costing and profitability models over very high-cardinality dimensions such as the SKUs of the Retail and Distribution industries, the customers of retail banks and telcos, or the individual users of utilities; the enrichment of Oracle Hyperion Financial Data Quality Management, in particular its ERP Integrator component, extending the pre-built integrations to SAP Financials and JD Edwards EnterpriseOne Financials; and the introduction of Oracle Exalytics, the first engineered system specifically designed for in-memory analytics, which delivers unprecedented calculation and analysis performance as data volumes, model sizes and user concurrency grow, thus supporting ever more complex and distributed Business Intelligence, Planning & Budgeting and Cost Allocation processes. On April 19th an event will be held at the Oracle office in Cinisello Balsamo (MI) to present in detail the new features introduced by the new EPM System release; the event will be repeated on May 3rd at the Oracle office in Rome. The event is public and free of charge; anyone interested can register here. For further information please refer to the official press release. Here you can also watch Mark Hurd's OpenWorld talk on the Oracle strategy for Business Analytics.


  • Specialization training events - Computer Gross 2011

    - by user801018
    Specialization events. The list price of the training is 2,700 euro per participant. For our Partners joining this initiative the cost is 800 euro* per participant. The maximum number of participants per session is 16. *Includes a voucher to register for the exam on the Pearson VUE site. To enrol, the Partner's employee must have their own account on the Pearson VUE site. If an account has not already been created, the employee must register at least 72 hours before requesting enrolment in the exam. Important: the employee must enter their own OPN COMPANY ID so that the certification is recognized under the OPN SPECIALIZATION PROGRAM. To enrol, click on the date you are interested in. Course code, course, date and location: D50102GC20 Oracle Database 11g: Administration Workshop I Ed 2 PRV (5 days), October 17, Milan; D58682GC20 Oracle WebLogic Server 11g: Administration Essentials Ed 2 PRV (5 days), October 24, Rome; D63510GC11 Oracle BI 11g R1: Create Analyses and Dashboards Ed 1 (4 days), October 24, Rome; D50079GC20 Oracle Database 11g: Administration Workshop II Ed 2 PRV (5 days), November 28, Milan; D58686GC20 Oracle WebLogic Server 11g: Advanced Administration Ed 2 (5 days), December 12, Milan; D53979GC20 Oracle Fusion Middleware 11g: Build Applications with ADF I Ed 2 (5 days), January 9, Milan; D67016GC20 Exadata and Database Machine Administration Workshop Ed 2 PRV (3 days), January 16, Milan; D65160GC10 Oracle Identity Manager 11g: Essentials Ed 1 (4 days), February 6, Milan; D63514GC11 Oracle BI 11g R1: Build Repositories Ed 1 PRV (5 days), February 6, Rome.


  • How can sales and production processes be integrated in a smart way?

    - by Claudia Caramelli-Oracle
    Technological innovation has transformed the way customers interact with companies. Moreover, today's market scenarios demand care and effectiveness in selling to stay fully competitive. To achieve the best sales performance it is necessary to accelerate and automate the exchange of information between sales and production departments, minimizing the time spent waiting for technical data and feasibility approvals, and reducing bottlenecks and possible human error through a process of controlling and standardizing the offer. The event sponsors await you on June 11 at the prestigious headquarters of the Unione Industriale di Torino to discover how to: shorten the sales cycle by making the entire sales process more efficient; minimize the impact of sales staff turnover; improve the value to promise; achieve better loyalty and satisfaction among your customers, reducing switching; watch a live, hands-on demonstration by Oracle, the world leader in CPQ (Configure, Price and Quoting) solutions, of a fast, easy-to-use tool that enables smart management of the commercial configuration of B2B offers, including mobile access and executive dashboards; and learn how other companies have successfully adopted these business solutions. Participation in the event is free but seating is limited, so register now to secure your place: CLICK HERE to register. If you need more information, write to Silvia Valgoi.


  • Customer Satisfaction is no longer enough!

    - by Silvia Valgoi
    The battle to win customer loyalty is played less and less on the product and more and more on the service. Since today's consumer is far more sophisticated and autonomous in their choices, the service must go well beyond the classic Customer Service interaction: it must deliver a genuinely positive purchasing experience. This is the finding, which is really a confirmation, of the Oracle Customer Experience Index, a study Oracle commissioned from the firm LoudHouse, which gathered the opinions of 1,400 European consumers, 200 of them Italian. No less than 81% of shoppers would be willing to pay more for a better customer experience. This is no trivial result, and it says a lot about how sophisticated today's consumer is and how much they expect from the company they are dealing with. 70% of respondents say that after a negative purchasing experience they would stop dealing with a given company, and 92% of these would buy from a competitor. This is why Customer Service is no longer enough: the purchasing experience must cover all touchpoints, from the first approach to the website to gather information, to the analysis of interactions on social media, to the consistency of the information and answers provided across every channel, physical and virtual alike. To address this, Oracle has created a set of solutions called the Customer Experience Suite, ranging from building advanced websites, to intelligence on social media, to the ability to establish a productive dialogue with customers after the sale. To read the press release for the study, click here. To explore the results of the CX Index study, click here.


  • Personal Software Process (PSP1)

    - by gentoo_drummer
    I'm trying to figure out an exercise, but it doesn't really make too much sense. I'm not asking anyone to provide the solution, just to try to analyse what needs to be done in order to solve it. I'm trying to understand which PSP 1.0/1.1 process I should use. PROBE? Or something else? I would greatly appreciate some help on this one from someone who has experience with the Personal Software Process methodology. Here is the question. For the reference case (“code1.c”), the following s/w metrics are provided: man-hours spent in implementation phase (per-module): 2,7 mh/file man-hours spent in testing phase (per-module): 4,3 mh/file estimated number of bugs remaining (per-module): 0,3 errors/function, 4 errors/module (remaining) Based on the corresponding values provided for the reference case, each of the following tasks focuses on some s/w metrics to be estimated for the test case (“code2.c”): [25 marks] (estimated) man-hours required in implementation phase (per-module) [8 marks] (estimated) man-hours required in testing phase (per-module) [8 marks] (estimated) number of bugs remaining at the end of testing phase (per-module) [9 marks] Tasks 4 through 6 should use the data provided for the reference case within the context of Personal Software Process level-1 (PSP-1), using them as a single-point historic data log. Specifically, the same s/w metrics are to be estimated for the test case (“code2.c”), using PSP as the basic estimation model. In order to perform the tasks listed above, students are advised to consider all phases of the PSP software development process, especially at levels PSP0 and PSP1. Both cases are to be treated as separate case studies in the context of classic s/w development.
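    Not an answer to the assignment itself, but a minimal sketch of the single-point, proportional estimation idea that PSP level 1 builds on. The LOC figures below are hypothetical placeholders; in the real exercise they would come from the PSP0 size measurements of the two files:

        # Single-point proportional estimation (PSP level-1 style sketch).
        reference = {              # historic data log from code1.c (per module)
            "impl_mh": 2.7,        # man-hours in implementation phase
            "test_mh": 4.3,        # man-hours in testing phase
            "bugs_left": 4.0,      # estimated bugs remaining per module
        }
        ref_size = 250             # hypothetical size of code1.c in LOC
        new_size = 400             # hypothetical estimated size of code2.c in LOC

        ratio = new_size / ref_size
        estimate = {metric: round(value * ratio, 1) for metric, value in reference.items()}
        print(estimate)            # {'impl_mh': 4.3, 'test_mh': 6.9, 'bugs_left': 6.4}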


  • Emailing Service: To or Bcc?

    - by Shelakel
    I'm busy coding a reusable e-mail service for my company. The e-mail service will be doing quite a few things via injection through the strategy pattern (such as handling e-mail send rate throttling, switching between Smtp and AmazonSES or Google AppEngine for e-mail clients when daily quotas are exceeded, and send statistics tracking (mostly because it is necessary in order to stay within quotas), to name a few). Because e-mail sending will need to be throttled and other limitations exist (e.g. the max recipient quota on AmazonSES limiting recipients to 50 per send), the e-mails typically need to be broken up. From your experience, would it be better to send bulk (multiple recipients per e-mail) or a single e-mail per recipient? The implication of the above is that to send to 1000 recipients with a limit of 50 per send, you would send 20 e-mails using BCC in a newsletter scenario. When sending an e-mail per recipient, it would send 1000 e-mails. E-mail sending is asynchronous (due to the inherent latency when sending, it's typically only possible to send 5 e-mails per second unless you are using multiple clients asynchronously). Edit: Just for full disclosure, this service won't be used by or sold to spammers and will as far as possible automatically comply with national and international laws. Closed: Thanks for all the valuable feedback. The concerns regarding compliance with laws, user experience (generic vs. personalized unsubscribe) and spam regulation via ISP blacklisting do make To the preferred and possibly the only choice when sending system-generated e-mails to recipients.
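    To illustrate the batching arithmetic described above, a small Python sketch (addresses are hypothetical) of splitting a recipient list into provider-sized batches; with a 50-recipient limit, a 1000-recipient newsletter becomes 20 BCC'd e-mails versus 1000 individual sends:

        def batch(recipients, max_per_send=50):
            """Yield recipient lists no larger than the provider's per-send limit."""
            for i in range(0, len(recipients), max_per_send):
                yield recipients[i:i + max_per_send]

        recipients = ["user%d@example.com" % n for n in range(1000)]
        batches = list(batch(recipients))
        print(len(batches))        # 20 BCC batches
        print(len(recipients))     # 1000 messages if sent one per recipient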


  • Advantage Database Server: slow stored procedure performance.

    - by ie
    I have a question about the performance of stored procedures in ADS. I created a simple database with the following structure: CREATE TABLE MainTable ( Id INTEGER PRIMARY KEY, Name VARCHAR(50), Value INTEGER ); CREATE UNIQUE INDEX MainTableName_UIX ON MainTable ( Name ); CREATE TABLE SubTable ( Id INTEGER PRIMARY KEY, MainId INTEGER, Name VARCHAR(50), Value INTEGER ); CREATE INDEX SubTableMainId_UIX ON SubTable ( MainId ); CREATE UNIQUE INDEX SubTableName_UIX ON SubTable ( Name ); CREATE PROCEDURE CreateItems ( MainName VARCHAR ( 20 ), SubName VARCHAR ( 20 ), MainValue INTEGER, SubValue INTEGER, MainId INTEGER OUTPUT, SubId INTEGER OUTPUT ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @SubName VARCHAR ( 20 ); DECLARE @MainValue INTEGER; DECLARE @SubValue INTEGER; DECLARE @MainId INTEGER; DECLARE @SubId INTEGER; @MainName = (SELECT MainName FROM __input); @SubName = (SELECT SubName FROM __input); @MainValue = (SELECT MainValue FROM __input); @SubValue = (SELECT SubValue FROM __input); @MainId = (SELECT MAX(Id)+1 FROM MainTable); @SubId = (SELECT MAX(Id)+1 FROM SubTable ); INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, @MainName, @MainValue); INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, @SubName, @MainId, @SubValue); INSERT INTO __output SELECT @MainId, @SubId FROM system.iota; END; CREATE PROCEDURE UpdateItems ( MainName VARCHAR ( 20 ), MainValue INTEGER, SubValue INTEGER ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @MainValue INTEGER; DECLARE @SubValue INTEGER; DECLARE @MainId INTEGER; @MainName = (SELECT MainName FROM __input); @MainValue = (SELECT MainValue FROM __input); @SubValue = (SELECT SubValue FROM __input); @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName); UPDATE MainTable SET Value = @MainValue WHERE Id = @MainId; UPDATE SubTable SET Value = @SubValue WHERE MainId = @MainId; END; CREATE PROCEDURE SelectItems ( MainName VARCHAR ( 20 ), CalculatedValue INTEGER OUTPUT ) BEGIN DECLARE @MainName VARCHAR ( 20 ); @MainName = (SELECT MainName FROM __input); INSERT INTO __output SELECT m.Value * s.Value FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId WHERE m.Name = @MainName; END; CREATE PROCEDURE DeleteItems ( MainName VARCHAR ( 20 ) ) BEGIN DECLARE @MainName VARCHAR ( 20 ); DECLARE @MainId INTEGER; @MainName = (SELECT MainName FROM __input); @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = @MainName); DELETE FROM SubTable WHERE MainId = @MainId; DELETE FROM MainTable WHERE Id = @MainId; END; Actually, the problem I have is that even such lightweight stored procedures run very slowly (about 50-150 ms) compared to plain queries (0-5 ms). 
To test the performance, I created a simple test (in F# using ADS ADO.NET provider): open System; open System.Data; open System.Diagnostics; open Advantage.Data.Provider; let mainName = "main name #"; let subName = "sub name #"; // INSERT let cmdTextScriptInsert = " DECLARE @MainId INTEGER; DECLARE @SubId INTEGER; @MainId = (SELECT MAX(Id)+1 FROM MainTable); @SubId = (SELECT MAX(Id)+1 FROM SubTable ); INSERT INTO MainTable (Id, Name, Value) VALUES (@MainId, :MainName, :MainValue); INSERT INTO SubTable (Id, Name, MainId, Value) VALUES (@SubId, :SubName, @MainId, :SubValue); SELECT @MainId, @SubId FROM system.iota;"; let cmdTextProcedureInsert = "CreateItems"; // UPDATE let cmdTextScriptUpdate = " DECLARE @MainId INTEGER; @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName); UPDATE MainTable SET Value = :MainValue WHERE Id = @MainId; UPDATE SubTable SET Value = :SubValue WHERE MainId = @MainId;"; let cmdTextProcedureUpdate = "UpdateItems"; // SELECT let cmdTextScriptSelect = " SELECT m.Value * s.Value FROM MainTable m INNER JOIN SubTable s ON m.Id = s.MainId WHERE m.Name = :MainName;"; let cmdTextProcedureSelect = "SelectItems"; // DELETE let cmdTextScriptDelete = " DECLARE @MainId INTEGER; @MainId = (SELECT TOP 1 Id FROM MainTable WHERE Name = :MainName); DELETE FROM SubTable WHERE MainId = @MainId; DELETE FROM MainTable WHERE Id = @MainId;"; let cmdTextProcedureDelete = "DeleteItems"; let cnnStr = @"data source=D:\DB\test.add; ServerType=local; user id=adssys; password=***;"; let cnn = new AdsConnection(cnnStr); try cnn.Open(); let cmd = cnn.CreateCommand(); let parametrize ix prms = cmd.Parameters.Clear(); let addParam = function | "MainName" -> cmd.Parameters.Add(":MainName" , mainName + ix.ToString()) |> ignore; | "SubName" -> cmd.Parameters.Add(":SubName" , subName + ix.ToString() ) |> ignore; | "MainValue" -> cmd.Parameters.Add(":MainValue", ix * 3 ) |> ignore; | "SubValue" -> cmd.Parameters.Add(":SubValue" , ix * 7 ) |> ignore; | _ -> () prms |> List.iter addParam; let runTest testData = let (cmdType, cmdName, cmdText, cmdParams) = testData; let toPrefix cmdType cmdName = let prefix = match cmdType with | CommandType.StoredProcedure -> "Procedure-" | CommandType.Text -> "Script -" | _ -> "Unknown -" in prefix + cmdName; let stopWatch = new Stopwatch(); let runStep ix prms = parametrize ix prms; stopWatch.Start(); cmd.ExecuteNonQuery() |> ignore; stopWatch.Stop(); cmd.CommandText <- cmdText; cmd.CommandType <- cmdType; let startId = 1500; let count = 10; for id in startId .. 
startId+count do runStep id cmdParams; let elapsed = stopWatch.Elapsed; Console.WriteLine("Test '{0}' - total: {1}; per call: {2}ms", toPrefix cmdType cmdName, elapsed, Convert.ToInt32(elapsed.TotalMilliseconds)/count); let lst = [ (CommandType.Text, "Insert", cmdTextScriptInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]); (CommandType.Text, "Update", cmdTextScriptUpdate, ["MainName"; "MainValue"; "SubValue"]); (CommandType.Text, "Select", cmdTextScriptSelect, ["MainName"]); (CommandType.Text, "Delete", cmdTextScriptDelete, ["MainName"]) (CommandType.StoredProcedure, "Insert", cmdTextProcedureInsert, ["MainName"; "SubName"; "MainValue"; "SubValue"]); (CommandType.StoredProcedure, "Update", cmdTextProcedureUpdate, ["MainName"; "MainValue"; "SubValue"]); (CommandType.StoredProcedure, "Select", cmdTextProcedureSelect, ["MainName"]); (CommandType.StoredProcedure, "Delete", cmdTextProcedureDelete, ["MainName"])]; lst |> List.iter runTest; finally cnn.Close(); And I'm getting the following results: Test 'Script -Insert' - total: 00:00:00.0292841; per call: 2ms Test 'Script -Update' - total: 00:00:00.0056296; per call: 0ms Test 'Script -Select' - total: 00:00:00.0051738; per call: 0ms Test 'Script -Delete' - total: 00:00:00.0059258; per call: 0ms Test 'Procedure-Insert' - total: 00:00:01.2567146; per call: 125ms Test 'Procedure-Update' - total: 00:00:00.7442440; per call: 74ms Test 'Procedure-Select' - total: 00:00:00.5120446; per call: 51ms Test 'Procedure-Delete' - total: 00:00:01.0619165; per call: 106ms The situation with the remote server is much better, but there is still a large gap between plain queries and stored procedures: Test 'Script -Insert' - total: 00:00:00.0709299; per call: 7ms Test 'Script -Update' - total: 00:00:00.0161777; per call: 1ms Test 'Script -Select' - total: 00:00:00.0258113; per call: 2ms Test 'Script -Delete' - total: 00:00:00.0166242; per call: 1ms Test 'Procedure-Insert' - total: 00:00:00.5116138; per call: 51ms Test 'Procedure-Update' - total: 00:00:00.3802251; per call: 38ms Test 'Procedure-Select' - total: 00:00:00.1241245; per call: 12ms Test 'Procedure-Delete' - total: 00:00:00.4336334; per call: 43ms Is there any way to improve the SP performance? Please advise. ADO.NET driver version - 9.10.2.9 Server version - 9.10.0.9 (ANSI - GERMAN, OEM - GERMAN) Thanks!


  • How can I implement NHibernate session per request without a dependency on NHibernate?

    - by Ben
    I've raised this question before but am still struggling to find an example that I can get my head around (please don't just tell me to look at the S#arp Architecture project without at least some directions). So far I have achieved near persistence ignorance in my web project. My repository classes (in my data project) take an ISession in the constructor: public class ProductRepository : IProductRepository { private ISession _session; public ProductRepository(ISession session) { _session = session; } In my global.asax I expose the current session and am creating and disposing the session on BeginRequest and EndRequest (this is where I have the dependency on NHibernate): public static ISessionFactory SessionFactory = CreateSessionFactory(); private static ISessionFactory CreateSessionFactory() { return new Configuration() .Configure() .BuildSessionFactory(); } protected MvcApplication() { BeginRequest += delegate { CurrentSessionContext.Bind(SessionFactory.OpenSession()); }; EndRequest += delegate { CurrentSessionContext.Unbind(SessionFactory).Dispose(); }; } And finally my StructureMap registry: public AppRegistry() { For<ISession>().TheDefault .Is.ConstructedBy(x => MvcApplication.SessionFactory.GetCurrentSession()); For<IProductRepository>().Use<ProductRepository>(); } It would seem I need my own generic implementations of ISession and ISessionFactory that I can use in my web project and inject into my repositories? I'm a little stuck so any help would be appreciated. Thanks, Ben


  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also: Using the same PixelFormat as the source bitmap, to avoid format conversion Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file (to avoid copying the pixel data again.) I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However I would prefer to use a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm that the images are correct before writing the code for the other program (that reads the image.) I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably this is influenced by the video adapter driver) and there is no corresponding BMP pixel format for this. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy) so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?


  • Configure Apache to use different Unix User Accounts (www-data) per Site.

    - by BrainCore
    An Apache 2.x Webserver with default configurations from the ubuntu/debian repositories will use the www-data unix account for apache2 processes handling web requests. Assuming that apache is serving two different sites (domain1.com and domain2.com), is it possible for apache to use unix user www-data1 when handling requests to domain1.com, and use unix user www-data2 when handling requests to domain2.com? The motivation is to isolate the code for each domain name from one another.
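    A sketch of one common approach, assuming the third-party apache2-mpm-itk MPM is installed (it is not part of a stock Apache install): its AssignUserID directive lets each virtual host handle requests under its own unix account.

        <VirtualHost *:80>
            ServerName domain1.com
            DocumentRoot /var/www/domain1.com
            <IfModule mpm_itk_module>
                AssignUserID www-data1 www-data1
            </IfModule>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName domain2.com
            DocumentRoot /var/www/domain2.com
            <IfModule mpm_itk_module>
                AssignUserID www-data2 www-data2
            </IfModule>
        </VirtualHost>

    An alternative, if switching MPMs is not an option, is to run each site's code behind its own FastCGI/PHP-FPM pool owned by the desired user.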


  • Why is numpy's einsum faster than numpy's built in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note that the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; those differences would show up in microseconds, not milliseconds: arr_1D=np.arange(500,dtype=np.double) large_arr_1D=np.arange(100000,dtype=np.double) arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500) arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500) First let's look at the np.sum function: np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D)) True %timeit np.sum(arr_3D) 10 loops, best of 3: 142 ms per loop %timeit np.einsum('ijk->', arr_3D) 10 loops, best of 3: 70.2 ms per loop Powers: np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D)) True %timeit arr_3D*arr_3D*arr_3D 1 loops, best of 3: 1.32 s per loop %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D) 1 loops, best of 3: 694 ms per loop Outer product: np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D)) True %timeit np.outer(arr_1D, arr_1D) 1000 loops, best of 3: 411 us per loop %timeit np.einsum('i,k->ik', arr_1D, arr_1D) 1000 loops, best of 3: 245 us per loop All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons as everything is specifically of dtype=np.double. I would expect the speed-up in an operation like this: np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D)) True %timeit np.sum(arr_2D*arr_3D) 1 loops, best of 3: 813 ms per loop %timeit np.einsum('ij,oij->', arr_2D, arr_3D) 10 loops, best of 3: 85.1 ms per loop Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case for completeness: np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D)) True %timeit np.einsum('ij,jk',arr_2D,arr_2D) 10 loops, best of 3: 56.1 ms per loop %timeit np.dot(arr_2D,arr_2D) 100 loops, best of 3: 5.17 ms per loop The leading theory is from @seberg's comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input array and observing the speed difference, and the fact that not everyone observes the same trends in timings.
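    For anyone who wants to reproduce the first comparison outside IPython's %timeit, a small standalone harness (array size reduced so it runs quickly; absolute numbers will differ from those above):

        import timeit
        import numpy as np

        # Standalone version of the np.sum vs np.einsum('ijk->') comparison.
        arr_3D = np.arange(200**3, dtype=np.double).reshape(200, 200, 200)
        assert np.allclose(np.sum(arr_3D), np.einsum('ijk->', arr_3D))

        n = 20
        t_sum = timeit.timeit(lambda: np.sum(arr_3D), number=n) / n
        t_einsum = timeit.timeit(lambda: np.einsum('ijk->', arr_3D), number=n) / n
        print("np.sum:    %.2f ms per loop" % (t_sum * 1000))
        print("np.einsum: %.2f ms per loop" % (t_einsum * 1000))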


  • How to configure Postfix to send more emails per hour than the default.

    - by dina-ak
    Hello, my Postfix only lets me send 3600 emails per hour (from which I conclude that there is a 1s delay between each email), while I want to send double that number. I looked in the Postfix configuration. Are there any parameters that I can change to send more than 3600 emails per hour? This is the output of postconf -n: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases bounce_queue_lifetime = 1d command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 default_destination_concurrency_limit = 5 default_destination_rate_delay = 0s html_directory = no inet_interfaces = all inet_protocols = ipv4 initial_destination_concurrency = 2 lmtp_destination_rate_delay = 0s local_destination_rate_delay = 0s mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man maximal_queue_lifetime = 1d mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = example.com myhostname = server01.example.com myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix qmgr_message_recipient_limit = 10000 queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.5.6/README_FILES relay_destination_rate_delay = 0s sample_directory = /usr/share/doc/postfix-2.5.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtp_bind_address = xxx.xxx.xxx.xxx smtp_destination_rate_delay = 0s smtp_generic_maps = hash:/etc/postfix/generic smtpd_banner = $myhostname ESMTP $mail_name smtpd_client_restrictions = check_client_access hash:/etc/postfix/access unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual virtual_destination_rate_delay = 0s


  • Flex HttpService POST limited to 543 Byte per Form field?

    - by motto
    Hi, I am getting a FaultEvent when trying to send form fields through HTTPService that contain more than 542 chars. Initializing the HttpService: httpServ = new HTTPService(); httpServ.method = 'POST'; httpServ.url = ENDPOINT_URL; //http://localhost:3001/ReportError.aspx httpServ.resultFormat = HTTPService.RESULT_FORMAT_TEXT; httpServ.contentType = HTTPService.CONTENT_TYPE_FORM; httpServ.addEventListener(ResultEvent.RESULT, OnErrorSent); httpServ.addEventListener(FaultEvent.FAULT, OnFault); Sending the request: var params:Object = {}; //params["stack"] = e.stackTrace.slice(0, 542); //length 542 = works //params["stack2"] = e.stackTrace.slice(1, 543); //length 542 = works (just to show that it's not about the content itself) params["stack3"] = e.stackTrace.slice(0, 543); //length 543 = fails I also seem to be able to create many form fields (each with length 542), so it's not a limit of the request itself but of the form field: var params:Object = {}; params["stack"] = e.stackTrace.slice(0, 542); //length 542 params["stack2"] = e.stackTrace.slice(1, 543); //length 542 params["stack3"] = e.stackTrace.slice(2, 544); //length 542 // Length > 1600 chars The receiving party is an ASP.NET 4 site on the same domain and port. I hope someone has already come across a similar restriction or has some general advice on how to trace this problem down further. Thanks in advance.


  • How can you toggle between two sets of values per data series in flot?

    - by Jedidja
    flot has built-in support for multiple data series (sample code) and also dual-axis (sample code). Assuming multiple data series (water, electricity, etc.) that each have an amount (usage) and a dollar value (charge for that usage), what would the best way be to use flot to display either the amounts or the dollar values for all the data series, while still supporting toggling the display of each individual series? The idea is to send down all the data in one GET request and then let the client take care of everything else in Javascript. Ideally we could use triplets somehow {date, amount, charge}, and then possibly split that into two arrays for flot.


  • Would this hack for per-object permissions in django work?

    - by Edward
    According to the documentation, a class can have the meta option permissions, described as such: Options.permissions Extra permissions to enter into the permissions table when creating this object. Add, delete and change permissions are automatically created for each object that has admin set. This example specifies an extra permission, can_deliver_pizzas: permissions = (("can_deliver_pizzas", "Can deliver pizzas"),) This is a list or tuple of 2-tuples in the format (permission_code, human_readable_permission_name). Would it be possible to define permissions at run time by: permissions = (("can_access_%s" % self.pk, "Has access to object %s of type %s" % (self.pk, self.__name__)),)?
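    For context, that Meta tuple cannot work as written: Meta.permissions is evaluated once, when the class is defined, so self does not exist yet and the permission rows are only created when the permissions table is synced. A sketch of one runtime alternative, assuming django.contrib.auth and django.contrib.contenttypes are installed (the Pizza model and myapp module are hypothetical); in practice a dedicated library such as django-guardian is usually a better fit for per-object permissions:

        from django.contrib.auth.models import Permission
        from django.contrib.contenttypes.models import ContentType
        from django.db.models.signals import post_save
        from django.dispatch import receiver

        from myapp.models import Pizza   # hypothetical model

        @receiver(post_save, sender=Pizza)
        def create_object_permission(sender, instance, created, **kwargs):
            # Create a per-object permission the first time an instance is saved.
            if created:
                Permission.objects.get_or_create(
                    codename="can_access_%s" % instance.pk,
                    name="Has access to object %s of type %s" % (instance.pk, sender.__name__),
                    content_type=ContentType.objects.get_for_model(sender),
                )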


  • How can I include an external JavaScript file exactly once per partial view?

    - by AaronSieb
    I have a partial view (UserControl) that implements a simple pager in my Asp.Net MVC project. This pager needs access to a .js file that needs to be included exactly once, regardless of how many instances of the pager control are present on the parent page. I've tried to use Page.ClientScript.RegisterClientScriptInclude, but it had no effect (I assume because the code nugget was evaluated too late to impact the head control). Is there any simple alternative?


  • How do I get the number of objects per day using django?

    - by Keith
    I have a django model with a DateTimeField. class Point(models.Model): somedata = models.CharField(max_length=256) time = models.DateTimeField() I want to get a count of the number of these objects for each day. I can do this with the following SQL query, but don't know how to do it through django. SELECT DATE(`time`), Count(*) FROM `app_point` GROUP BY DATE(`time`) Being able to restrict the results to a date range would also be good.
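    A sketch of the equivalent query through the Django ORM, assuming a reasonably recent Django version (TruncDate arrived in 1.10; on older versions the same grouping is typically done with .extra(select={...})):

        from datetime import date

        from django.db.models import Count
        from django.db.models.functions import TruncDate

        from myapp.models import Point   # the model shown above; module name is hypothetical

        start, end = date(2010, 1, 1), date(2010, 12, 31)   # example date range

        per_day = (
            Point.objects
            .filter(time__date__range=(start, end))   # optional date-range restriction
            .annotate(day=TruncDate("time"))           # GROUP BY DATE(`time`)
            .values("day")
            .annotate(count=Count("id"))
            .order_by("day")
        )
        # per_day yields dicts such as {'day': datetime.date(2010, 5, 1), 'count': 42}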


  • CakePHP: Same model, set up per form validation rules?

    - by mwaterous
    I have a single model in CakePHP that has multiple forms on different pages of the site that I would like to validate differently even where the field name is the same - I have discovered that you can set 'on' to create|update which has been a handy discovery but I am wondering if there is any other way of explicitly declaring rules based on the form that was submitted? Just to rephrase for clarity, form a and form b contain fields of the same name, but if form a is submitted the fields in question should be validated differently than if they were submitted from form b. Possible?


  • Can I have more than one route per action on select parameters?

    - by zsharp
    My action has 3 parameters, but only two are used at a time. So I want to do this: People is the action, with parameters string Height, string searchHigh, string searchLow: /Groups/People/Tall/searchHigh and this: /Groups/People/Short/searchLow. I map both and the first route works, but the second gets appended to the first when I go to the Short tab.

