Search Results

Search found 4291 results on 172 pages for 'cluster analysis'.

Page 26 of 172

  • I need advice about iscsi + zfs(or ntfs) + windows 2008 clustering

    - by Fatih
    I want to set up a storage farm with iSCSI. I have 2 cluster node machines and 1 iSCSI target machine that has 8 TB installed as RAID 10. The capacity is now 8 TB, but I'll upgrade it in the future. Let's say I install the cluster nodes as file servers, connect these servers to the iSCSI target, and then share the 8 TB capacity as a single folder to the Windows users. Users now see only one folder whose capacity is 8 TB. But if I add another 8 TB to expand the main capacity, the users must not see a second folder for the new 8 TB. They must still see only one folder as before, but with its capacity expanded to 16 TB. And so on: if I add another 8 TB, the users must still deal with only one folder. For this purpose, I've learned that ZFS can expand its size without a problem. So if I use ZFS as the file system on the iSCSI LUNs, how can the cluster machines see the ZFS volume, given that the cluster machines run Windows 2008? Is there another way to expand the size of the shared folder without a problem? Does NTFS support it?

    Read the article

  • Programmaticaly finding the Landau notation (Big O or Theta notation) of an algorithm?

    - by Julien L
    I'm used to working out the Landau (Big O, Theta...) notation of my algorithms by hand to make sure they are as optimized as they can be, but when the functions get really big and complex, it takes way too much time to do by hand, and it's also prone to human error. I spent some time on Codility (coding/algo exercises) and noticed that it gives you the Landau notation for your submitted solution (for both time and memory usage). I was wondering how they do that... How would you do it? Is there another way besides lexical analysis or parsing of the code? PS: This question concerns mainly PHP and/or JavaScript, but I'm open to any language and theory.
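
    There is no general programmatic way to derive the exact Landau notation of arbitrary code (the problem is undecidable in general); services like Codility typically estimate it empirically by running the solution on inputs of increasing size, timing it, and fitting the growth curve. A minimal sketch of that idea in Python, where solution and make_input are hypothetical stand-ins for the code under test and its input generator:

        import math
        import time

        def measure(solution, make_input, sizes):
            """Time `solution` on inputs of increasing size n; return (n, seconds) pairs."""
            timings = []
            for n in sizes:
                data = make_input(n)
                start = time.perf_counter()
                solution(data)
                timings.append((n, time.perf_counter() - start))
            return timings

        def estimate_exponent(timings):
            """Least-squares slope of log(time) vs log(n): ~1 suggests O(n), ~2 suggests O(n^2), etc."""
            pts = [(math.log(n), math.log(t)) for n, t in timings if t > 0]
            if len(pts) < 2:
                return float('nan')
            mean_x = sum(x for x, _ in pts) / len(pts)
            mean_y = sum(y for _, y in pts) / len(pts)
            num = sum((x - mean_x) * (y - mean_y) for x, y in pts)
            den = sum((x - mean_x) ** 2 for x, _ in pts)
            return num / den

        # Example: sorting a reversed list; the fitted exponent should come out close to 1
        # (n log n shows up as slightly above linear).
        timings = measure(sorted, lambda n: list(range(n, 0, -1)), [10_000, 20_000, 40_000, 80_000])
        print(estimate_exponent(timings))

    This only tells you how the code behaves on the inputs you feed it, which is exactly the caveat that applies to Codility-style measurements as well.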

    Read the article

  • Is catching general exceptions really a bad thing?

    - by Bob Horn
    I typically agree with most code analysis warnings, and I try to adhere to them. However, I'm having a harder time with this one: CA1031: Do not catch general exception types. I understand the rationale for this rule. But, in practice, if I want to take the same action regardless of the exception thrown, why would I handle each one specifically? Furthermore, if I handle specific exceptions, what if the code I'm calling changes to throw a new exception in the future? Now I have to change my code to handle that new exception, whereas if I simply caught Exception my code wouldn't have to change. For example, if Foo calls Bar, and Foo needs to stop processing regardless of the type of exception thrown by Bar, is there any advantage in being specific about the type of exception I'm catching?
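
    The rule being discussed is a .NET/FxCop one, but the trade-off is language-agnostic. A common compromise is to catch the broad exception type only at a boundary where every failure really is handled the same way (log and stop), and let everything else propagate. A rough Python analog of the Foo/Bar scenario from the question; process() is a hypothetical helper:

        import logging

        logger = logging.getLogger(__name__)

        def bar(data):
            # May raise any number of exception types (I/O, parsing, ...).
            return process(data)  # hypothetical helper standing in for the real work

        def foo(data):
            try:
                return bar(data)
            except Exception:
                # Broad catch at the boundary: the action is the same for every failure.
                logger.exception("bar failed; stopping processing")
                return None  # or wrap and re-raise a domain-specific exception

    The analyzer's caveats still apply: don't swallow the exception silently, and don't catch broadly in code that could meaningfully recover from only some failure types.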

    Read the article

  • Why use sealed instead of static on a class?

    - by sq33G
    Our system has several utility classes. Some people on our team use (A) a class with all-static methods and a private constructor. Others (mostly the juniors) use (B) a class with all-static methods but no private constructor. On code analysis, (A) and (B) raise warning CA1052, which recommends marking the class as sealed. The MSDN documentation includes the following advice: If you are targeting .NET Framework 2.0 or earlier, a better approach is to mark the type as static. Why does this make any sense? I would have thought the opposite; AFAIK, prior to 2.0 there was no way to mark a class as static.

    Read the article

  • Clustered MSDTC

    - by niel
    Hi, I'm setting up a SQL cluster (SQL 2008) on Windows 2008 R2. I enable network access on the local DTC and then create a DTC resource in my cluster. The problem is that when I start up the resource it does not pull through my settings to enable network access. The log shows this: MSDTC started with the following settings: Security Configuration (OFF = 0 and ON = 1): Allow Remote Administrator = 0, Network Clients = 0, Transaction Manager Communication: Allow Inbound Transactions = 0, Allow Outbound Transactions = 0, Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 0, Enable SNA LU 6.2 Transactions = 1, MSDTC Communications Security = Mutual Authentication Required, Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0 Transaction Bridge Installed = 0 Filtering Duplicate Events = 1. However, when I restart the local DTC service it says this: Security Configuration (OFF = 0 and ON = 1): Allow Remote Administrator = 0, Network Clients = 1, Transaction Manager Communication: Allow Inbound Transactions = 1, Allow Outbound Transactions = 1, Transaction Internet Protocol (TIP) = 0, Enable XA Transactions = 1, Enable SNA LU 6.2 Transactions = 1, MSDTC Communications Security = No Authentication Required, Account = NT AUTHORITY\NetworkService, Firewall Exclusion Detected = 0 Transaction Bridge Installed = 0 Filtering Duplicate Events = 1. The settings on both nodes in the cluster are the same. I have reinstalled and restarted too many times to mention. Any ideas?

    Read the article

  • Development methodology for single web developer?

    - by CaseTA
    I'm a web developer who mostly works with the LAMP stack when it comes to my own projects. Most of the time I just start coding on a project and fixing bugs and adding features as I go along. Often I'll try to use an existing solution such as Wordpress or Drupal. Now that I'm thinking of creating my own web application with businesses as the target group, I feel there's a need for proper analysis and design. Something lightweight for a one person project and still solid enough to handle requirements, user interfaces, security, etc. If you could recommend methodologies and literature I would be grateful.

    Read the article

  • a question on using Microsoft Excel to do data analysis

    - by Gemma
    Hello everyone, my mates and I have been given an Excel spreadsheet which contains the data of a census report. Each column corresponds to a year (2000-2010), each row to a city/town, and each cell is simply the population of that town in that year. We want to do some analysis, like "which town had its first population gain in 10 years?" etc. My question is simply: can we do this in Excel, or do we need to export the data to another database (SQL) and then do the programming? Thanks in advance.
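
    This particular question can be answered in Excel (helper columns or array formulas will do it), but once the questions get more elaborate it is usually easier to export the sheet to CSV and script it. A sketch in Python/pandas, assuming a hypothetical census.csv export with a Town column and one column per year:

        import pandas as pd

        # Hypothetical export of the spreadsheet: Town, 2000, 2001, ..., 2010
        df = pd.read_csv("census.csv", index_col="Town")

        # Year-over-year change for each town (the first year has no previous year, hence NaN).
        gains = df.diff(axis=1)

        # First year in which each town gained population (NaN if it never did).
        grew = gains.gt(0)
        first_gain_year = grew.idxmax(axis=1).where(grew.any(axis=1))
        print(first_gain_year)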

    Read the article

  • Python: Improving long cumulative sum

    - by Bo102010
    I have a program that operates on a large set of experimental data. The data is stored as a list of objects that are instances of a class with the following attributes: time_point - the time of the sample cluster - the name of the cluster of nodes from which the sample was taken code - the name of the node from which the sample was taken qty1 = the value of the sample for the first quantity qty2 = the value of the sample for the second quantity I need to derive some values from the data set, grouped in three ways - once for the sample as a whole, once for each cluster of nodes, and once for each node. The values I need to derive depend on the (time sorted) cumulative sums of qty1 and qty2: the maximum value of the element-wise sum of the cumulative sums of qty1 and qty2, the time point at which that maximum value occurred, and the values of qty1 and qty2 at that time point. I came up with the following solution: dataset.sort(key=operator.attrgetter('time_point')) # For the whole set sys_qty1 = 0 sys_qty2 = 0 sys_combo = 0 sys_max = 0 # For the cluster grouping cluster_qty1 = defaultdict(int) cluster_qty2 = defaultdict(int) cluster_combo = defaultdict(int) cluster_max = defaultdict(int) cluster_peak = defaultdict(int) # For the node grouping node_qty1 = defaultdict(int) node_qty2 = defaultdict(int) node_combo = defaultdict(int) node_max = defaultdict(int) node_peak = defaultdict(int) for t in dataset: # For the whole system ###################################################### sys_qty1 += t.qty1 sys_qty2 += t.qty2 sys_combo = sys_qty1 + sys_qty2 if sys_combo > sys_max: sys_max = sys_combo # The Peak class is to record the time point and the cumulative quantities system_peak = Peak(time_point=t.time_point, qty1=sys_qty1, qty2=sys_qty2) # For the cluster grouping ################################################## cluster_qty1[t.cluster] += t.qty1 cluster_qty2[t.cluster] += t.qty2 cluster_combo[t.cluster] = cluster_qty1[t.cluster] + cluster_qty2[t.cluster] if cluster_combo[t.cluster] > cluster_max[t.cluster]: cluster_max[t.cluster] = cluster_combo[t.cluster] cluster_peak[t.cluster] = Peak(time_point=t.time_point, qty1=cluster_qty1[t.cluster], qty2=cluster_qty2[t.cluster]) # For the node grouping ##################################################### node_qty1[t.node] += t.qty1 node_qty2[t.node] += t.qty2 node_combo[t.node] = node_qty1[t.node] + node_qty2[t.node] if node_combo[t.node] > node_max[t.node]: node_max[t.node] = node_combo[t.node] node_peak[t.node] = Peak(time_point=t.time_point, qty1=node_qty1[t.node], qty2=node_qty2[t.node]) This produces the correct output, but I'm wondering if it can be made more readable/Pythonic, and/or faster/more scalable. The above is attractive in that it only loops through the (large) dataset once, but unattractive in that I've essentially copied/pasted three copies of the same algorithm. 
To avoid the copy/paste issues of the above, I tried this also: def find_peaks(level, dataset): def grouping(object, attr_name): if attr_name == 'system': return attr_name else: return object.__dict__[attrname] cuml_qty1 = defaultdict(int) cuml_qty2 = defaultdict(int) cuml_combo = defaultdict(int) level_max = defaultdict(int) level_peak = defaultdict(int) for t in dataset: cuml_qty1[grouping(t, level)] += t.qty1 cuml_qty2[grouping(t, level)] += t.qty2 cuml_combo[grouping(t, level)] = (cuml_qty1[grouping(t, level)] + cuml_qty2[grouping(t, level)]) if cuml_combo[grouping(t, level)] > level_max[grouping(t, level)]: level_max[grouping(t, level)] = cuml_combo[grouping(t, level)] level_peak[grouping(t, level)] = Peak(time_point=t.time_point, qty1=node_qty1[grouping(t, level)], qty2=node_qty2[grouping(t, level)]) return level_peak system_peak = find_peaks('system', dataset) cluster_peak = find_peaks('cluster', dataset) node_peak = find_peaks('node', dataset) For the (non-grouped) system-level calculations, I also came up with this, which is pretty: dataset.sort(key=operator.attrgetter('time_point')) def cuml_sum(seq): rseq = [] t = 0 for i in seq: t += i rseq.append(t) return rseq time_get = operator.attrgetter('time_point') q1_get = operator.attrgetter('qty1') q2_get = operator.attrgetter('qty2') timeline = [time_get(t) for t in dataset] cuml_qty1 = cuml_sum([q1_get(t) for t in dataset]) cuml_qty2 = cuml_sum([q2_get(t) for t in dataset]) cuml_combo = [q1 + q2 for q1, q2 in zip(cuml_qty1, cuml_qty2)] combo_max = max(cuml_combo) time_max = timeline.index(combo_max) q1_at_max = cuml_qty1.index(time_max) q2_at_max = cuml_qty2.index(time_max) However, despite this version's cool use of list comprehensions and zip(), it loops through the dataset three times just for the system-level calculations, and I can't think of a good way to do the cluster-level and node-level calaculations without doing something slow like: timeline = defaultdict(int) cuml_qty1 = defaultdict(int) #...etc. for c in cluster_list: timeline[c] = [time_get(t) for t in dataset if t.cluster == c] cuml_qty1[c] = [q1_get(t) for t in dataset if t.cluster == c] #...etc. Does anyone here at Stack Overflow have suggestions for improvements? The first snippet above runs well for my initial dataset (on the order of a million records), but later datasets will have more records and clusters/nodes, so scalability is a concern. This is my first non-trivial use of Python, and I want to make sure I'm taking proper advantage of the language (this is replacing a very convoluted set of SQL queries, and earlier versions of the Python version were essentially very ineffecient straight transalations of what that did). I don't normally do much programming, so I may be missing something elementary. Many thanks!
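
    One way to keep a single pass over the data while avoiding the three copies of the algorithm is to parameterize the grouping with key functions. A minimal sketch (it follows the code above in using t.node, even though the attribute list at the top says code, and it records each peak as a plain tuple rather than the Peak class):

        from collections import defaultdict
        from operator import attrgetter

        def find_peaks(dataset, keys):
            """Single pass over time-sorted data. `keys` maps a grouping name to a key
            function; returns {grouping: {group: (combo, time_point, qty1, qty2)}} at the
            peak of the combined cumulative sum."""
            sums = {name: (defaultdict(int), defaultdict(int)) for name in keys}
            peaks = {name: {} for name in keys}
            for t in sorted(dataset, key=attrgetter('time_point')):
                for name, key in keys.items():
                    g = key(t)
                    q1, q2 = sums[name]
                    q1[g] += t.qty1
                    q2[g] += t.qty2
                    combo = q1[g] + q2[g]
                    best = peaks[name].get(g)
                    if best is None or combo > best[0]:
                        peaks[name][g] = (combo, t.time_point, q1[g], q2[g])
            return peaks

        result = find_peaks(dataset, {
            'system':  lambda t: 'system',        # one group covering the whole set
            'cluster': attrgetter('cluster'),
            'node':    attrgetter('node'),
        })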

    Read the article

  • Running Awk command on a cluster

    - by alex
    How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back to a central node (step 2)? Hadoop seems to be huge overkill with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job). I don't need shared memory, or something like MPI/OpenMP, as I don't need to synchronize or share anything; I don't need a distributed VM or anything as complex. Google's Sawzall seems to work only with Google's proprietary MapReduce API. Some distributed shell packages I found failed to compile, but there must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls. I liked rsync's simplicity, but it seems to update remote nodes sequentially, and you can't use it for executing scripts as far as I know. Switching to Plan 9 or some other network-oriented OS looks like another overkill. I'm looking for a simple, distributed way to run awk scripts or similar - as close as possible to the data, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks Alex
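
    If password-less SSH to the nodes is already set up, plain Python (or xargs/pdsh/GNU parallel) covers both steps without any framework. A minimal sketch with hypothetical host names and a hypothetical per-node data path; step 1 runs awk next to the data over SSH, step 2 gathers stdout on the central node:

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        HOSTS = ["node01", "node02", "node03"]                      # hypothetical node names
        CMD = "awk '{ s += $3 } END { print s }' /data/part-*.log"  # hypothetical data path

        def run_on(host):
            # Step 1: execute the command remotely; ssh is the transport, awk runs on the node.
            out = subprocess.run(["ssh", host, CMD], capture_output=True, text=True, check=True)
            return host, out.stdout.strip()

        # Step 2: collect the per-node results back on the central node, in parallel.
        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            results = dict(pool.map(run_on, HOSTS))

        print(results)
        print(sum(int(v) for v in results.values()))  # final reduce, if the outputs are numeric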

    Read the article

  • Best ORM, Simple data Structures, Strong Query analysis.

    - by sayth
    What is the best ORM/DB combination for simple data structures? That is, data that contains names as identifiers and locations, but whose main interaction will be numerical data for times (sports durations) and currency-related data. I initially want to create a sports database that will take names and statistics. Secondarily, I plan to start on an investment and stock analysis DB. Which ORM suits storing many numerical types and has strong query functions? I really am not biased towards a DB engine (I'd most likely use SQLite or Mongo), so any suggestions for the best networkless DB server to suit said ORM are appreciated.
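
    Not an answer on which ORM is "best", but to make the requirement concrete, here is a minimal sketch of the kind of mapping described, using SQLAlchemy over SQLite (my choice purely for illustration; the table and column names are hypothetical):

        # SQLAlchemy 1.4+ style imports
        from sqlalchemy import create_engine, Column, Integer, String, Float, Numeric, func
        from sqlalchemy.orm import declarative_base, sessionmaker

        Base = declarative_base()

        class Result(Base):
            __tablename__ = "results"
            id = Column(Integer, primary_key=True)
            athlete = Column(String, index=True)   # name used as an identifier
            location = Column(String)
            duration_s = Column(Float)             # sports duration, in seconds
            prize = Column(Numeric(12, 2))         # currency-related data

        engine = create_engine("sqlite:///sports.db")
        Base.metadata.create_all(engine)
        Session = sessionmaker(bind=engine)

        with Session() as session:
            # Fastest time per athlete, ordered by that time.
            fastest = (session.query(Result.athlete, func.min(Result.duration_s))
                       .group_by(Result.athlete)
                       .order_by(func.min(Result.duration_s))
                       .all())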

    Read the article

  • Google maps cluster manager

    - by Prashant
    Hello all, I am using Google Maps in my application. I have to show 100 markers on the map. First I prepared a markers array from these markers. When the markers are added using addOverlay from the markers array, it takes some time and they are added in an animated way (in sequence). I want all of them to be added to the map in a single shot, with no flickering effect. I tried MarkerClusterer, but it shows a cluster of markers where needed. Instead I want all the markers to appear, not a cluster - they should just be added faster. var point = new GLatLng(latArr[i],lonArr[i]); var marker = new GMarker(point,markerOptions); markers[i] = marker; var markerCluster = new MarkerClusterer(map, markers); Any suggestions, please? Thank you.

    Read the article

  • Matlab: Analysis of signal

    - by Mateusz
    Hi, I have a problem with this task: for an arbitrary signal, perform frequency analysis and give the parameters of each signal component: the time of beginning and ending of each component; the beginning and ending frequency; the amplitude (in the time domain) at the beginning and end of each component; and the level of noise in dB. Assume that the parameters of each component, like amplitude and frequency, change linearly in time. The sampling frequency is 1000 Hz. For example, I have a signal like this: Nx=64; fs=1000; t=1/fs*(0:Nx-1); %========================== A1=1; A2=4; f1=500; f2=1000; x1=A1*cos(2*pi*f1*t); x2=A2*sin(2*pi*f2*t); %========================== x=x1+x2;
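
    As a cross-check of the frequency-analysis step (sketched here in Python/NumPy rather than Matlab, purely for illustration): the spectrum of the example signal can be read off an FFT. Note that the example's 1000 Hz sine sampled at fs = 1000 Hz aliases to 0 Hz and its samples are exactly zero; also, for components whose frequency changes linearly in time, a short-time analysis (a spectrogram) is what actually reveals the start/end times and frequencies.

        import numpy as np

        fs, Nx = 1000, 64
        t = np.arange(Nx) / fs

        # Example signal from the question.
        x = 1.0 * np.cos(2 * np.pi * 500 * t) + 4.0 * np.sin(2 * np.pi * 1000 * t)

        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(Nx, d=1 / fs)
        mag = np.abs(spectrum)

        # Print the three strongest bins.
        for k in np.argsort(mag)[-3:][::-1]:
            print(f"{freqs[k]:6.1f} Hz  |X| = {mag[k]:.1f}")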

    Read the article

  • SMO some times doesn't display the instances in sql2008 cluster

    - by Cute
    Hi, I have used the SMO API. In it I have used SmoApplication.EnumAvailableServers(FALSE) and from that I have filtered the local instances. I used this approach instead of passing TRUE so that it is also convenient for remote SQL discovery. Using that API I created a DLL and use that DLL in C++. This works in all combinations, but sometimes it fails to retrieve the instances in a Windows 2008 / SQL 2008 cluster combination. If I run the exe 5 times, it succeeds 3 times and fails 2 times... What is wrong with the Windows/SQL 2008 cluster? Are there any additional changes needed to make it work properly? My firewall is off and I have also added an exception for TCP port 1433. Any help is greatly appreciated... Thanks in advance.

    Read the article

  • What threading analysis tools do you recommend?

    - by glutz78
    My primary IDE is Visual Studio 2005 and I have a large C/C++ project. I'm interested in what thread analysis tools are recommended. By that I mean I want a tool, static or dynamic, to help find race conditions, deadlocks, and the like. So far I've casually researched the following: 1. Intel Thread Checker: I don't believe it ties into VS 2005? 2. Valgrind/Helgrind: free. 3. Coverity: this is a costly tool if I understand correctly. Does anyone have experience with any of these or others? I'd much appreciate any advice. Thank you.

    Read the article

  • Cluster analysis on two columns that contain name of person in R

    - by Alka Shah
    I am a beginner in R. I have to do cluster analysis on data that contains two columns with names of persons. I converted it into a data frame, but it is of character type, and to use the dist() function the data frame must be numeric. Example of my data:

              Interviewed.Type   interviewed.Relation.Type
        1.    An1                Xuan
        2.    An2                The
        3.    An3                Ngoc
        4.    Bui                Thi
        5.    ANT                feed
        7.    Bach               Thi
        8.    Gian1              Thi
        9.    Lan5               Thi
        ...
        1100. Xung               Van

    I will be grateful for your help.
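
    dist() needs numeric input because the classical distance measures are defined on numbers. The usual fix is to encode the name columns numerically (factor/integer codes or dummy variables), or to use a dissimilarity measure built for categorical data such as Gower (daisy() in R's cluster package). The idea, sketched in Python/pandas purely for illustration, with the column names taken from the question:

        import pandas as pd
        from scipy.cluster.hierarchy import linkage
        from scipy.spatial.distance import pdist

        df = pd.DataFrame({
            "Interviewed.Type": ["An1", "An2", "An3", "Bui", "ANT"],
            "interviewed.Relation.Type": ["Xuan", "The", "Ngoc", "Thi", "feed"],
        })

        # One-hot encode the two character columns so a distance can be computed.
        encoded = pd.get_dummies(df)

        # Hierarchical clustering on the 0/1 matrix (Jaccard suits binary indicators).
        distances = pdist(encoded.values.astype(bool), metric="jaccard")
        tree = linkage(distances, method="average")

    Whether clustering on raw person names is meaningful is a separate question; usually one clusters on attributes of the people rather than on the names themselves.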

    Read the article

  • Determine cluster size of file system in Python

    - by Philip Fourie
    I would like to calculate the "size on disk" of a file in Python. Therefore I would like to determine the cluster size of the file system where the file is stored. How do I determine the cluster size in Python? Or another built-in method that calculates the "size on disk" will also work. I looked at os.path.getsize but it returns the file size in bytes, not taking the FS's block size into consideration. I am hoping that this can be done in an OS independent way...
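
    There is no single portable API for this, so here is a sketch of one common approach: ask GetDiskFreeSpaceW (via ctypes) for the allocation unit on Windows, and fall back to os.statvfs elsewhere. On POSIX systems, os.stat(path).st_blocks * 512 is often the more direct route to "size on disk".

        import ctypes
        import os

        def cluster_size(path):
            """Allocation unit ("cluster") size of the filesystem containing path."""
            if os.name == 'nt':
                sectors_per_cluster = ctypes.c_ulong(0)
                bytes_per_sector = ctypes.c_ulong(0)
                free_clusters = ctypes.c_ulong(0)
                total_clusters = ctypes.c_ulong(0)
                root = os.path.splitdrive(os.path.abspath(path))[0] + '\\'
                ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
                    ctypes.c_wchar_p(root),
                    ctypes.byref(sectors_per_cluster), ctypes.byref(bytes_per_sector),
                    ctypes.byref(free_clusters), ctypes.byref(total_clusters))
                if not ok:
                    raise ctypes.WinError()
                return sectors_per_cluster.value * bytes_per_sector.value
            # POSIX: f_frsize is the fundamental (allocation) block size on most systems.
            return os.statvfs(path).f_frsize

        def size_on_disk(path):
            cluster = cluster_size(path)
            size = os.path.getsize(path)
            return ((size + cluster - 1) // cluster) * cluster

    Note that this rounds the logical size up to whole clusters, so sparse or compressed files will be over-reported.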

    Read the article

  • (Visual) C++ project dependency analysis

    - by polyglot
    I have a few large projects I am working on in my new place of work, which have a complicated set of statically linked library dependencies between them. The libs number around 40-50 and it's really hard to determine what the structure was initially meant to be; there isn't clear documentation of the full dependency map. What tools would anyone recommend to extract such data? Presumably, in the simplest manner, if one did the following: define the set of paths which correspond to library units; assign all .cpp/.h files within those paths to those compilation units; capture the first-order #include dependency tree - one would have enough information to compose a map, refactor, and recompose the map, until one has created some order. I note that http://www.ndepend.com has something nice, but that's exclusively .NET, unfortunately. I read something about Doxygen being able to accomplish some static dependency analysis with configuration; has anyone ever pressed it into service to accomplish such a task?
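
    For the manual route sketched above, a small script is usually enough to capture the first-order #include graph. A rough sketch, where the source roots and the unit mapping are assumptions to adapt to the real project layout:

        import os
        import re
        from collections import defaultdict

        SOURCE_ROOTS = ["libs/foo", "libs/bar"]   # hypothetical library unit directories
        INCLUDE_RE = re.compile(r'^\s*#\s*include\s*["<]([^">]+)[">]')

        def unit_of(path):
            """Map a file to its library unit: here, simply the root directory it lives under."""
            for root in SOURCE_ROOTS:
                if os.path.abspath(path).startswith(os.path.abspath(root)):
                    return root
            return None

        deps = defaultdict(set)   # library unit -> headers it pulls in (first order only)
        for root in SOURCE_ROOTS:
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    if not name.endswith((".cpp", ".cc", ".h", ".hpp")):
                        continue
                    path = os.path.join(dirpath, name)
                    with open(path, errors="ignore") as f:
                        for line in f:
                            m = INCLUDE_RE.match(line)
                            if m:
                                deps[unit_of(path)].add(m.group(1))

        for unit, headers in sorted(deps.items()):
            print(unit, "->", sorted(headers))

    Mapping the collected header names back to their owning units gives the unit-to-unit edges, which can then be fed to Graphviz; Doxygen's include graphs (with dot enabled) work along the same lines.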

    Read the article

  • Tool for response time analysis on JBoss server?

    - by Ariel Vardi
    I am running a pretty high-traffic cluster of JBoss servers serving REST requests, and I am interested in tools that read the access logs in Tomcat format (with the %D parameter) to provide a detailed analysis of response time on a per-call basis. Ideally this tool would generate a chart showing the progression of response time throughout the day, hour by hour, then a weekly view with daily averages, and a monthly view with weekly averages (CACTI style). I've looked for such tools and couldn't find anything. Are any of you aware of something close to that, before I start writing my own? I haven't looked into CACTI extensions yet; would that be an option?
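
    If nothing off the shelf fits, the parsing side is not much work. A rough sketch that assumes a combined-style access log where %D (the request processing time; check your valve's documentation for whether it is reported in milliseconds or microseconds) is appended as the last field, and that buckets the average per hour; rolling the hourly numbers up to daily and weekly averages, or pushing them into RRDtool/Cacti, is straightforward from there:

        import re
        from collections import defaultdict

        # Assumed line shape: '10.0.0.1 - - [17/Jun/2010:11:43:50 +0300] "GET /rest/x HTTP/1.1" 200 512 123'
        # where the trailing field is %D.
        LINE_RE = re.compile(r'\[(\d{2})/(\w{3})/(\d{4}):(\d{2}):\d{2}:\d{2} [^\]]+\].*\s(\d+)$')
        MONTHS = {m: i for i, m in enumerate("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec".split(), 1)}

        totals = defaultdict(lambda: [0, 0])   # (year, month, day, hour) -> [sum, count]

        with open("access.log") as f:
            for line in f:
                m = LINE_RE.search(line.strip())
                if not m:
                    continue
                day, month, year, hour, rt = m.groups()
                key = (int(year), MONTHS[month], int(day), int(hour))
                totals[key][0] += int(rt)
                totals[key][1] += 1

        for key, (total, count) in sorted(totals.items()):
            print(key, "average response time:", total / count)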

    Read the article

  • Matlab Question - Principal Component Analysis

    - by Jack
    I have a set of 100 observations where each observation has 45 characteristics, and each of those observations has a label attached which I want to predict based on those 45 characteristics. So it's an input matrix with dimension 45 x 100 and a target matrix with dimension 1 x 100. The thing is that I want to know how many of those 45 characteristics are relevant in my set of data - basically, principal component analysis - and I understand that I can do this with the Matlab function processpca. Could you please tell me how I can do this? Suppose that the input matrix is x with 45 rows and 100 columns and y is a vector with 100 elements.
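
    Not the Matlab processpca route the question asks about, but the same analysis sketched in Python/scikit-learn for reference (the random matrix is a stand-in for the real observations). Two things worth keeping in mind: PCA expects observations in rows, so the 45 x 100 matrix x would be transposed, and PCA ignores the labels y entirely, so for "which characteristics matter for predicting the label" a supervised method (feature selection, regularized regression) is usually the better tool.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 45))   # stand-in for x transposed: 100 observations x 45 characteristics

        pca = PCA()
        pca.fit(X)

        # How many principal components are needed to explain 95% of the variance.
        cumulative = np.cumsum(pca.explained_variance_ratio_)
        k = int(np.searchsorted(cumulative, 0.95)) + 1
        print(f"{k} components explain 95% of the variance")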

    Read the article

  • SAS(Statistical Analysis System) Career As a computer science student

    - by Renju
    Hi. I completed an MSc in Computer Science this academic year, so I am a fresher. During my graduation and post-graduation I did many projects using PHP and MySQL. Now I have an opportunity to get into a SAS (Statistical Analysis System) career, and I heard that SAS has better career growth than PHP development. For the past 4 days I have been working with SAS and I find the work interesting. My questions are: since I am a computer science student, will I have any problems with my career growth in SAS? I am ready to learn statistics as well; is there anything else I have to do? Will doing a certification in SAS make any difference? Is it a bad idea to get into SAS with only a CS background? Please guide me on my career...

    Read the article

  • Any recommended VC++ settings for better PDB analysis on release builds

    - by Brian R. Bondy
    Are there any VC++ settings I should know about to generate better PDB files that contain more information? I have a crash dump analysis system in place based on the project crashrpt. Also, my production build server has the source code installed on the D:\, but my development machine has the source code on the C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if I had my dev machine's source code on the D:\ it would work.

    Read the article

  • Clustering on WebLogic exception on Failover

    - by Markos Fragkakis
    Hi all, I deploy an application on a WebLogic 10.3.2 cluster with two nodes, and a load balancer in front of the cluster. I have set the <core:init distributable="true" debug="true" /> My Session and Conversation classes implement Serializable. I start using the application being served by the first node. The console shows that the session replication is working. <Jun 17, 2010 11:43:50 AM EEST> <Info> <Cluster> <BEA-000128> <Updating 5903057688359791237S:xxx.yyy.gr:[7002,7002,-1,-1,-1,-1,-1]:xxx.yyy.gr:7002,xxx.yyy.gr:7002:prs_domain:PRS_Server_2 in the cluster.> <Jun 17, 2010 11:43:50 AM EEST> <Info> <Cluster> <BEA-000128> <Updating 5903057688359791237S:xxx.yyy.gr:[7002,7002,-1,-1,-1,-1,-1]:xxx.yyy.gr:7002,xxx.yyy.gr:7002:prs_domain:PRS_Server_2 in the cluster.> When I shutdown the first node from the Administration console, I get this in the other node: <Jun 17, 2010 11:23:46 AM EEST> <Error> <Kernel> <BEA-000802> <ExecuteRequest failed java.lang.NullPointerException. java.lang.NullPointerException at org.jboss.seam.intercept.JavaBeanInterceptor.callPostActivate(JavaBeanInterceptor.java:165) at org.jboss.seam.intercept.JavaBeanInterceptor.invoke(JavaBeanInterceptor.java:73) at com.myproj.beans.SortingFilteringBean_$$_javassist_seam_2.sessionDidActivate(SortingFilteringBean_$$_javassist_seam_2.java) at weblogic.servlet.internal.session.SessionData.notifyActivated(SessionData.java:2258) at weblogic.servlet.internal.session.SessionData.notifyActivated(SessionData.java:2222) at weblogic.servlet.internal.session.ReplicatedSessionData.becomePrimary(ReplicatedSessionData.java:231) at weblogic.cluster.replication.WrappedRO.changeStatus(WrappedRO.java:142) at weblogic.cluster.replication.WrappedRO.ensureStatus(WrappedRO.java:129) at weblogic.cluster.replication.LocalSecondarySelector$ChangeSecondaryInfo.run(LocalSecondarySelector.java:542) at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) > What am I doing wrong? This is the SortingFilteringBean: import java.util.HashMap; import java.util.LinkedHashMap; import org.jboss.seam.ScopeType; import org.jboss.seam.annotations.Name; import org.jboss.seam.annotations.Scope; import com.myproj.model.crud.Filtering; import com.myproj.model.crud.Sorting; import com.myproj.model.crud.SortingOrder; /** * Managed bean aggregating the sorting and filtering values for all the * application's lists. A light-weight bean to always keep in the session with * minimum impact. */ @Name("sortingFilteringBean") @Scope(ScopeType.SESSION) public class SortingFilteringBean extends BaseManagedBean { private static final long serialVersionUID = 1L; private Sorting applicantProductListSorting; private Filtering applicantProductListFiltering; private Sorting homePageSorting; private Filtering homePageFiltering; /** * Creates a new instance of SortingFilteringBean. 
*/ public SortingFilteringBean() { // ********************** // Applicant Product List // ********************** // Sorting LinkedHashMap<String, SortingOrder> applicantProductListSortingValues = new LinkedHashMap<String, SortingOrder>(); applicantProductListSortingValues.put("applicantName", SortingOrder.ASCENDING); applicantProductListSortingValues.put("applicantEmail", SortingOrder.ASCENDING); applicantProductListSortingValues.put("productName", SortingOrder.ASCENDING); applicantProductListSortingValues.put("productEmail", SortingOrder.ASCENDING); applicantProductListSorting = new Sorting( applicantProductListSortingValues); // Filtering HashMap<String, String> applicantProductListFilteringValues = new HashMap<String, String>(); applicantProductListFilteringValues.put("applicantName", ""); applicantProductListFilteringValues.put("applicantEmail", ""); applicantProductListFilteringValues.put("productName", ""); applicantProductListFilteringValues.put("productEmail", ""); applicantProductListFiltering = new Filtering( applicantProductListFilteringValues); // ********* // Home page // ********* // Sorting LinkedHashMap<String, SortingOrder> homePageSortingValues = new LinkedHashMap<String, SortingOrder>(); homePageSortingValues.put("productName", SortingOrder.ASCENDING); homePageSortingValues.put("productId", SortingOrder.ASCENDING); homePageSortingValues.put("productAtcCode", SortingOrder.UNSORTED); homePageSortingValues.put("productEmaNumber", SortingOrder.UNSORTED); homePageSortingValues.put("productOrphan", SortingOrder.UNSORTED); homePageSortingValues.put("productRap", SortingOrder.UNSORTED); homePageSortingValues.put("productCorap", SortingOrder.UNSORTED); homePageSortingValues.put("applicationTypeDescription", SortingOrder.ASCENDING); homePageSortingValues.put("applicationId", SortingOrder.ASCENDING); homePageSortingValues .put("applicationEmaNumber", SortingOrder.UNSORTED); homePageSortingValues .put("piVersionImportDate", SortingOrder.ASCENDING); homePageSortingValues.put("piVersionId", SortingOrder.ASCENDING); homePageSorting = new Sorting(homePageSortingValues); // Filtering HashMap<String, String> homePageFilteringValues = new HashMap<String, String>(); homePageFilteringValues.put("productName", ""); homePageFilteringValues.put("productAtcCode", ""); homePageFilteringValues.put("productEmaNumber", ""); homePageFilteringValues.put("applicationTypeId", ""); homePageFilteringValues.put("applicationEmaNumber", ""); homePageFilteringValues.put("piVersionImportDate", ""); homePageFiltering = new Filtering(homePageFilteringValues); } /** * @return the applicantProductListFiltering */ public Filtering getApplicantProductListFiltering() { return applicantProductListFiltering; } /** * @param applicantProductListFiltering * the applicantProductListFiltering to set */ public void setApplicantProductListFiltering( Filtering applicantProductListFiltering) { this.applicantProductListFiltering = applicantProductListFiltering; } /** * @return the applicantProductListSorting */ public Sorting getApplicantProductListSorting() { return applicantProductListSorting; } /** * @param applicantProductListSorting * the applicantProductListSorting to set */ public void setApplicantProductListSorting( Sorting applicantProductListSorting) { this.applicantProductListSorting = applicantProductListSorting; } /** * @return the homePageSorting */ public Sorting getHomePageSorting() { return homePageSorting; } /** * @param homePageSorting * the homePageSorting to set */ public void setHomePageSorting(Sorting 
homePageSorting) { this.homePageSorting = homePageSorting; } /** * @return the homePageFiltering */ public Filtering getHomePageFiltering() { return homePageFiltering; } /** * @param homePageFiltering * the homePageFiltering to set */ public void setHomePageFiltering(Filtering homePageFiltering) { this.homePageFiltering = homePageFiltering; } /** * For convenience to view in the Seam Debug page. * * @see java.lang.Object#toString() */ @Override public String toString() { StringBuilder sb = new StringBuilder(""); sb.append("\n\n"); sb.append("applicantProductListSorting"); sb.append(applicantProductListSorting); sb.append("\n\n"); sb.append("applicantProductListFiltering"); sb.append(applicantProductListFiltering); sb.append("\n\n"); sb.append("homePageSorting"); sb.append(homePageSorting); sb.append("\n\n"); sb.append("homePageFiltering"); sb.append(homePageFiltering); return sb.toString(); } } And this is the BaseManagedBean, inheriting the AbstractMutable. import java.io.IOException; import java.io.OutputStream; import java.util.List; import javax.faces.application.FacesMessage; import javax.faces.application.FacesMessage.Severity; import javax.faces.context.FacesContext; import javax.servlet.http.HttpServletResponse; import org.apache.commons.lang.ArrayUtils; import org.jboss.seam.core.AbstractMutable; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.myproj.common.exceptions.WebException; import com.myproj.common.util.FileUtils; import com.myproj.common.util.StringUtils; import com.myproj.web.messages.Messages; public abstract class BaseManagedBean extends AbstractMutable { private static final Logger logger = LoggerFactory .getLogger(BaseManagedBean.class); private FacesContext facesContext; /** * Set a message to be displayed for a specific component. * * @param resourceBundle * the resource bundle where the message appears. Either base or * id may be used. * @param summaryResourceId * the id of the resource to be used as summary. For the detail * of the element, the element to be used will be the same with * the suffix {@code _detail}. * @param parameters * the parameters, in case the string is parameterizable * @param severity * the severity of the message * @param componentId * the component id for which the message is destined. Note that * an appropriate JSF {@code <h:message for="myComponentId">} tag * is required for the to appear, or alternatively a {@code * <h:messages>} tag. */ protected void setMessage(String resourceBundle, String summaryResourceId, List<Object> parameters, Severity severity, String componentId, Messages messages) { FacesContext context = getFacesContext(); FacesMessage message = messages.getMessage(resourceBundle, summaryResourceId, parameters); if (severity != null) { message.setSeverity(severity); } context.addMessage(componentId, message); } /** * Copies a byte array to the response output stream with the appropriate * MIME type and content disposition. The response output stream is closed * after this method. 
* * @param response * the HTTP response * @param bytes * the data * @param filename * the suggested file name for the client * @param mimeType * the MIME type; will be overridden if the filename suggests a * different MIME type * @throws IllegalArgumentException * if the data array is <code>null</code>/empty or both filename * and mimeType are <code>null</code>/empty */ protected void printBytesToResponse(HttpServletResponse response, byte[] bytes, String filename, String mimeType) throws WebException, IllegalArgumentException { if (response.isCommitted()) { throw new WebException("HTTP response is already committed"); } if (ArrayUtils.isEmpty(bytes)) { throw new IllegalArgumentException("Data buffer is empty"); } if (StringUtils.isEmpty(filename) && StringUtils.isEmpty(mimeType)) { throw new IllegalArgumentException( "Filename and MIME type are both null/empty"); } // Set content type (mime type) String calculatedMimeType = FileUtils.getMimeType(filename); // not among the known ones String newMimeType = mimeType; if (calculatedMimeType == null) { // given mime type passed if (mimeType == null) { // none available put default mime-type newMimeType = "application/download"; } else { if ("application/octet-stream".equals(mimeType)) { // small modification newMimeType = "application/download"; } } } else { // calculated mime type has precedence over given mime type newMimeType = calculatedMimeType; } response.setContentType(newMimeType); // Set content disposition and other headers String contentDisposition = "attachment;filename=\"" + filename + "\""; response.setHeader("Content-Disposition", contentDisposition); response.setHeader("Expires", "0"); response.setHeader("Cache-Control", "max-age=30"); response.setHeader("Pragma", "public"); // Set content length response.setContentLength(bytes.length); // Write bytes to response OutputStream out = null; try { out = response.getOutputStream(); out.write(bytes); } catch (IOException e) { throw new WebException("Error writing data to HTTP response", e); } finally { try { out.close(); } catch (Exception e) { logger.error("Error closing HTTP stream", e); } } } /** * Retrieve a session-scoped managed bean. * * @param sessionBeanName * the session-scoped managed bean name * @return the session-scoped managed bean */ protected Object getSessionBean(String sessionBeanName) { Object sessionScopedBean = FacesContext.getCurrentInstance() .getExternalContext().getSessionMap().get(sessionBeanName); if (sessionScopedBean == null) { throw new IllegalArgumentException("No such object in Session"); } else { return sessionScopedBean; } } /** * Set a session-scoped managed bean * * @param sessionBeanName * the session-scoped managed bean name * @return the session-scoped managed bean */ protected boolean setSessionBean(String sessionBeanName, Object sessionBean) { Object sessionScopedBean = FacesContext.getCurrentInstance() .getExternalContext().getSessionMap().get(sessionBeanName); if (sessionScopedBean == null) { FacesContext.getCurrentInstance().getExternalContext() .getSessionMap().put(sessionBeanName, sessionBean); } else { throw new IllegalArgumentException( "This session-scoped bean was already initialized"); } return true; } /** * For testing (enables mock of FacesContext) * * @return the faces context */ public FacesContext getFacesContext() { if (facesContext == null) { return FacesContext.getCurrentInstance(); } return facesContext; } /** * For testing (enables mocking of FacesContext). * * @param aFacesContext * a - possibly mock - faces context. 
*/ public void setFacesContext(FacesContext aFacesContext) { this.facesContext = aFacesContext; } }

    Read the article

  • MDX Studio download #mdx #ssas

    - by Marco Russo (SQLBI)
    Short version: the latest available version of MDX Studio can be downloaded from http://www.sqlbi.com/tools/mdx-studio/ Long version: Last week Stacia Misner tweeted that the online version of MDX Studio was no longer available. It was hosted on http://mdx.mosha.com. It was sad news, and it is also not good that nobody is maintaining the desktop version of MDX Studio. The latest release is 0.4.14 and, as I write, it is still available on a SkyDrive link provided by Mosha Pasumansky, who wrote MDX Studio. Mosha no longer works at Microsoft, and the entire BI community hopes that somebody will continue his work on this product. Unfortunately, it cannot be published on CodePlex because of some IP restrictions. Only bad news? Well, I hope not. The first good news is that MDX Studio also works with Analysis Services 2012 in Multidimensional mode. The second is that, after checking that we were allowed to do so, we created a web page on the SQLBI web site to download the latest available release of MDX Studio. I hope it will be necessary to update it in the future; for now it is just a way to make this precious tool easier to find and download, and to ensure that it will not disappear if the SkyDrive account currently hosting the download is discontinued, as happened to the MDX Studio online version. Now a question to the BI community: I know that there was some tutorial content available for MDX Studio. I'd like to gather it and put it all in a single place. If you have such content, please contact me directly writing to marco (dot) russo (at) sqlbi [dot] com. Thanks!

    Read the article

  • Meet SQLBI at PASS Summit 2012 #sqlpass

    - by Marco Russo (SQLBI)
    Next week Alberto Ferrari and I will be in Seattle at PASS Summit 2012. You can meet us at our sessions, at a book signing, and hopefully while watching some other sessions during the conference. Here are our appointments: Thursday, November 08, 2012, 10:15 AM - 11:45 AM – Alberto Ferrari – Room 606-607 Querying and Optimizing DAX (BIA-321-S) Do you want to learn how to write DAX queries and how to optimize them? Don't miss this session! Thursday, November 08, 2012, 12:00 PM - 12:30 PM – Bookstore Book signing event at the Bookstore corner with Alberto Ferrari, Marco Russo and Chris Webb Visit the bookstore and sign your copy of our Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model book. Thursday, November 08, 2012, 1:30 PM - 2:45 PM – Marco Russo – Room 611 Near Real-Time Analytics with xVelocity (without DirectQuery) (BIA-312) What's the latency you can tolerate for your data? Discover what the limit is in Tabular without using DirectQuery and learn how to optimize your data model and your queries for a near real-time analytical system. Not a trivial task, but more affordable than you might think. Friday, November 09, 2012, 9:45 AM - 11:00 AM Parent-Child Hierarchies in Tabular (BIA-301) Multidimensional has more advanced support for hierarchies than Tabular, but in reality you can do almost the same things by using data modeling, DAX functions and BIDS Helper! Friday, November 09, 2012, 1:00 PM - 2:15 PM – Marco Russo – Room 612 Inside DAX Query Plans (BIA-403) Discover the query plan for your DAX query and learn how to read it and how to optimize a DAX query by using this information. If you meet us at the conference, stop us and say hello: it's always nice to know our readers!

    Read the article
