Peter Sassone Phones & Addresses

  • Round Rock, TX
  • Austin, TX
  • Monroe Township, NJ
  • 2151 Cumberland Rd, Atlanta, GA 30339 (770) 444-3815
  • 1947 Four Seasons Dr, Marietta, GA 30064 (770) 429-0625

Work

Company: Qualcomm (since Oct 2010)
Address: Austin, TX
Position: Microarchitect

Education

Degree: Doctor of Philosophy (PhD)
School / High School: Georgia Institute of Technology, 2001 to 2005
Specialties: Computer Engineering

Skills

RTL Design • Logic Design • Processors • Verilog • SystemVerilog • Microarchitecture • Digital Signal Processors • VLSI • Computer Architecture • Microprocessors • Compilers • Power Optimization

Industries

Automotive

Resumes

Microarchitect and Logic Designer

Location:
Austin, TX
Industry:
Automotive
Work:
Qualcomm - Austin, TX since Oct 2010
Microarchitect

Intel Jun 2005 - Oct 2010
Performance Modeler
Education:
Georgia Institute of Technology 2001 - 2005
Doctor of Philosophy (PhD), Computer Engineering
Georgia Institute of Technology 1996 - 2000
Bachelor of Science (BS), Computer Engineering
Skills:
RTL Design
Logic Design
Processors
Verilog
SystemVerilog
Microarchitecture
Digital Signal Processors
VLSI
Computer Architecture
Microprocessors
Compilers
Power Optimization

Publications

ISBN (Books and Publications)

Cost-Benefit Analysis: A Handbook

Author

Peter G. Sassone

ISBN #

0126193509

Us Patents

Efficient Bloom Filter

US Patent:
20080147714, Jun 19, 2008
Filed:
Dec 19, 2006
Appl. No.:
11/642314
Inventors:
Mauricio Breternitz - Austin TX, US
Youfeng Wu - Palo Alto CA, US
Peter G. Sassone - Austin TX, US
Jeffrey P. Rupley - Round Rock TX, US
Wesley Attrot - Austin TX, US
Bryan Black - Austin TX, US
International Classification:
G06F 17/30
US Classification:
707/102
Abstract:
Implementation of a Bloom filter using multiple single-ported memory slices. A control value is combined with a hashed address value such that the resultant address value has the property that one, and only one, of the k memories or slices is selected for a given input value, a, for each bank. Collisions are thereby avoided and the multiple hash accesses for a given input value, a, may be performed concurrently. Other embodiments are also described and claimed.
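
For readers who want the idea in concrete form, here is a minimal software sketch of the slicing scheme the abstract describes. It is illustrative only: the class name, SHA-256 hashing, and sizes are assumptions, not the patented hardware. The property it reproduces is that the k probes for one input are steered to k distinct single-ported slices, so they never collide and could be performed concurrently.

```python
# Illustrative sketch, not the patented circuit: a Bloom filter split across
# k single-ported slices. A base ("control") term rotates probe i onto its own
# slice, so the k probes for one key always hit k distinct slices.
import hashlib

class SlicedBloomFilter:
    def __init__(self, num_slices=4, slice_bits=1024):
        self.k = num_slices
        self.slice_bits = slice_bits
        self.slices = [bytearray(slice_bits) for _ in range(num_slices)]

    def _probes(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        hashes = [int.from_bytes(digest[4 * i:4 * i + 4], "little")
                  for i in range(self.k)]
        base = hashes[0] % self.k              # control value combined with each hash
        for i, h in enumerate(hashes):
            slice_id = (base + i) % self.k     # one and only one slice per probe
            bit = h % self.slice_bits
            yield slice_id, bit

    def add(self, key):
        for slice_id, bit in self._probes(key):
            self.slices[slice_id][bit] = 1

    def may_contain(self, key):
        return all(self.slices[s][b] for s, b in self._probes(key))

bf = SlicedBloomFilter()
bf.add("0xdeadbeef")
print(bf.may_contain("0xdeadbeef"))            # True; absent keys are usually False
```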

Scheduling A Direct Dependent Instruction

US Patent:
20080244224, Oct 2, 2008
Filed:
Mar 29, 2007
Appl. No.:
11/729711
Inventors:
Peter Sassone - Austin TX, US
Jeff Rupley - Austin TX, US
Bryan Black - Austin TX, US
International Classification:
G06F 15/00
US Classification:
712/23
Abstract:
In one embodiment, the present invention includes an apparatus having an instruction selector to select an instruction, where the selector is to store a dependent indicator to indicate a direct dependent consumer instruction of a producer instruction, a decode logic coupled to the instruction selector to receive the dependent indicator when the producer instruction is selected and to generate a wakeup signal for the direct dependent consumer instruction, and wakeup logic to receive the wakeup signal and to indicate that the producer instruction has been selected. Other embodiments are described and claimed.
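
The following toy model (an illustration with invented names, not the disclosed hardware) shows the point of the dependent indicator: when a producer is selected, only its recorded direct dependent consumer is woken, instead of broadcasting a destination tag for every waiting instruction to compare against.

```python
# Toy scheduler model: selecting a producer wakes its recorded direct dependent.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Entry:
    name: str
    waiting_on: Set[str] = field(default_factory=set)  # unresolved producers
    dependent: Optional[str] = None                     # direct dependent consumer, if recorded

class Scheduler:
    def __init__(self):
        self.entries = {}

    def insert(self, name, sources=()):
        # Sources are assumed to still be waiting in the scheduler.
        self.entries[name] = Entry(name, set(sources))
        for src in sources:
            producer = self.entries.get(src)
            if producer is not None and producer.dependent is None:
                producer.dependent = name               # store the dependent indicator

    def select(self):
        ready = [e for e in self.entries.values() if not e.waiting_on]
        if not ready:
            return None
        chosen = ready[0]
        del self.entries[chosen.name]
        # Wakeup: notify only the recorded direct dependent that its producer issued.
        if chosen.dependent in self.entries:
            self.entries[chosen.dependent].waiting_on.discard(chosen.name)
        return chosen.name

sched = Scheduler()
sched.insert("load r1")
sched.insert("add r2, r1", sources=["load r1"])
print(sched.select(), sched.select())                   # load r1, then add r2, r1
```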

Hybrid Write-Through/Write-Back Cache Policy Managers, And Related Systems And Methods

US Patent:
20130185511, Jul 18, 2013
Filed:
May 14, 2012
Appl. No.:
13/470643
Inventors:
Peter G. Sassone - Austin TX, US
Christopher Edward Koob - Round Rock TX, US
Dana M. Vantrease - Austin TX, US
Suresh K. Venkumahanti - Austin TX, US
Lucian Codrescu - Austin TX, US
Assignee:
QUALCOMM Incorporated - San Diego CA
International Classification:
G06F 12/08
US Classification:
711/119, 711/E12.026
Abstract:
Embodiments disclosed in the detailed description include hybrid write-through/write-back cache policy managers, and related systems and methods. A cache write policy manager is configured to determine whether at least two caches among a plurality of parallel caches are active. If none of the one or more other caches is active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-back cache policy. In this manner, the cache write policy manager may conserve power and/or increase performance of a singly active processor core. If any of the one or more other caches are active, the cache write policy manager is configured to instruct an active cache among the parallel caches to apply a write-through cache policy. In this manner, the cache write policy manager facilitates data coherency among the parallel caches when multiple processor cores are active.
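
The decision rule itself is simple enough to sketch. The snippet below is a hedged reading of the abstract (function and type names are illustrative, not Qualcomm's): with at most one active cache, write-back saves power and shared-level bandwidth; with two or more active parallel caches, write-through keeps peers coherent.

```python
# Sketch of the hybrid policy decision described in the abstract.
from enum import Enum

class WritePolicy(Enum):
    WRITE_BACK = "write-back"
    WRITE_THROUGH = "write-through"

def select_write_policy(cache_active_flags):
    """cache_active_flags: one bool per parallel cache (e.g. one per core)."""
    active = sum(1 for flag in cache_active_flags if flag)
    if active <= 1:
        # No peer cache needs to observe stores, so write-back avoids pushing
        # every store to the shared memory level.
        return WritePolicy.WRITE_BACK
    # Multiple caches active: write-through keeps the shared level current so
    # the other caches can obtain up-to-date data.
    return WritePolicy.WRITE_THROUGH

print(select_write_policy([True, False, False, False]))   # WritePolicy.WRITE_BACK
print(select_write_policy([True, True, False, False]))    # WritePolicy.WRITE_THROUGH
```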

Utilizing Negative Feedback From Unexpected Miss Addresses In A Hardware Prefetcher

US Patent:
20130185515, Jul 18, 2013
Filed:
Jan 16, 2012
Appl. No.:
13/350909
Inventors:
Peter G. Sassone - Austin TX, US
Suman Mamidi - Austin TX, US
Elizabeth Abraham - Austin TX, US
Suresh K. Venkumahanti - Austin TX, US
Lucian Codrescu - Austin TX, US
Assignee:
QUALCOMM INCORPORATED - San Diego CA
International Classification:
G06F 12/08
US Classification:
711/137, 711/E12.057
Abstract:
Systems and methods for populating a cache using a hardware prefetcher are disclosed. A method for prefetching cache entries includes determining an initial stride value based on at least a first and second demand miss address in the cache, verifying the initial stride value based on a third demand miss address in the cache, prefetching a predetermined number of cache entries based on the verified initial stride value, determining an expected next miss address in the cache based on the verified initial stride value and addresses of the prefetched cache entries; and confirming the verified initial stride value based on comparing the expected next miss address to a next demand miss address in the cache. If the verified initial stride value is confirmed, additional cache entries are prefetched. If the verified initial stride value is not confirmed, further prefetching is stalled and an alternate stride value is determined.
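
A compact model of that learn/verify/confirm loop is sketched below. It is a simplification under assumed parameters (a prefetch degree of 4, byte addresses), not the actual hardware: the stride is learned from two misses, verified by a third, prefetches are issued, and an unexpected next miss acts as negative feedback that stalls prefetching and restarts learning.

```python
# Toy stride prefetcher with negative feedback from unexpected miss addresses.
DEGREE = 4                                     # assumed prefetch degree

class StridePrefetcher:
    def __init__(self):
        self.misses = []
        self.stride = None
        self.confirmed = False

    def on_demand_miss(self, addr):
        prefetches = []
        self.misses.append(addr)
        if self.stride is None:
            if len(self.misses) >= 3:
                s1 = self.misses[-2] - self.misses[-3]
                s2 = self.misses[-1] - self.misses[-2]
                if s1 == s2 and s1 != 0:       # initial stride verified
                    self.stride = s1
                    prefetches = [addr + self.stride * i for i in range(1, DEGREE + 1)]
                    self.expected = prefetches[-1] + self.stride
        elif addr == self.expected:            # expected miss: stride confirmed
            self.confirmed = True
            prefetches = [addr + self.stride * i for i in range(1, DEGREE + 1)]
            self.expected = prefetches[-1] + self.stride
        else:                                  # negative feedback: stall and relearn
            self.stride = None
            self.confirmed = False
            self.misses = [addr]
        return prefetches

pf = StridePrefetcher()
for a in (0x100, 0x140, 0x180):                # learn and verify a 0x40 stride
    issued = pf.on_demand_miss(a)
print([hex(x) for x in issued])                # ['0x1c0', '0x200', '0x240', '0x280']
pf.on_demand_miss(0x999)                       # unexpected miss stalls prefetching
```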

Use Of Loop And Addressing Mode Instruction Set Semantics To Direct Hardware Prefetching

US Patent:
20130185516, Jul 18, 2013
Filed:
Jan 16, 2012
Appl. No.:
13/350914
Inventors:
Peter G. Sassone - Austin TX, US
Suman Mamidi - Austin TX, US
Elizabeth Abraham - Austin TX, US
Suresh K. Venkumahanti - Austin TX, US
Lucian Codrescu - Austin TX, US
Assignee:
QUALCOMM Incorporated - San Diego CA
International Classification:
G06F 12/12
US Classification:
711/137, 711/E12.004
Abstract:
Systems and methods for prefetching cache lines into a cache coupled to a processor. A hardware prefetcher is configured to recognize a memory access instruction as an auto-increment-address (AIA) memory access instruction, infer a stride value from an increment field of the AIA instruction, and prefetch lines into the cache based on the stride value. Additionally or alternatively, the hardware prefetcher is configured to recognize that prefetched cache lines are part of a hardware loop, determine a maximum loop count of the hardware loop, and a remaining loop count as a difference between the maximum loop count and a number of loop iterations that have been completed, select a number of cache lines to prefetch, and truncate an actual number of cache lines to prefetch to be less than or equal to the remaining loop count, when the remaining loop count is less than the selected number of cache lines.
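
Both ideas reduce to a small amount of arithmetic, sketched below under assumptions (field names and the degree of 4 are illustrative): the stride comes straight from the auto-increment field, and the number of lines prefetched is truncated by the hardware loop's remaining iteration count.

```python
# Sketch: stride from an auto-increment-address (AIA) load, prefetch count
# clamped to the remaining hardware-loop iterations.
from dataclasses import dataclass

@dataclass
class AIALoad:
    address: int        # current effective address
    increment: int      # bytes added to the address each iteration

def plan_prefetches(inst, max_loop_count, completed_iterations, degree=4):
    stride = inst.increment                          # stride inferred from the AIA field
    remaining = max_loop_count - completed_iterations
    count = min(degree, max(remaining, 0))           # truncate to remaining loop count
    return [inst.address + stride * i for i in range(1, count + 1)]

# A loop with only 2 iterations left gets 2 prefetches, not the full degree of 4.
print(plan_prefetches(AIALoad(address=0x1000, increment=64),
                      max_loop_count=10, completed_iterations=8))
```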

Multiple Clustered Very Long Instruction Word Processing Core

US Patent:
20160062770, Mar 3, 2016
Filed:
Aug 29, 2014
Appl. No.:
14/473947
Inventors:
- San Diego CA, US
Ankit Ghiya - Austin TX, US
Peter Gene Sassone - Austin TX, US
Lucian Codrescu - Austin TX, US
Suman Mamidi - Austin TX, US
International Classification:
G06F 9/38
Abstract:
A method includes identifying, at a scheduling unit, a resource conflict at a shared processing resource that is accessible by a first processing cluster and by a second processing cluster, where the first processing cluster, the second processing cluster, and the shared processing resource are included in a very long instruction word (VLIW) processing unit. The method also includes resolving the resource conflict.
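
The abstract leaves the resolution mechanism open; the tiny sketch below shows one plausible reading (the fixed-priority grant and names are assumptions, not taken from the patent): the scheduling unit detects when both clusters want the shared resource in the same cycle and stalls one of them.

```python
# One plausible conflict-resolution scheme: grant one cluster, stall the rest.
def arbitrate(requests):
    """requests: dict of cluster name -> wants the shared resource this cycle."""
    wanting = [c for c, wants in requests.items() if wants]
    if len(wanting) <= 1:
        return {c: False for c in requests}          # no conflict, nobody stalls
    granted = min(wanting)                           # assumed fixed-priority grant
    return {c: (c in wanting and c != granted) for c in requests}

print(arbitrate({"cluster0": True, "cluster1": True}))   # cluster1 stalls
```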

Latency-Based Power Mode Units For Controlling Power Modes Of Processor Cores, And Related Methods And Systems

US Patent:
20150301573, Oct 22, 2015
Filed:
Apr 22, 2014
Appl. No.:
14/258541
Inventors:
- San Diego CA, US
Peter Gene Sassone - Austin TX, US
Sanjay Bhagawan Patil - Austin TX, US
Assignee:
QUALCOMM Incorporated - San Diego CA
International Classification:
G06F 1/32
Abstract:
Latency-based power mode units for controlling power modes of processor cores, and related methods and systems are disclosed. In one aspect, the power mode units are configured to reduce power provided to the processor core when the processor core has one or more threads in pending status and no threads in active status. An operand of an instruction being processed by a thread may be data in memory located outside the processor core. If the processor core does not require as much power to operate while a thread waits for a request from outside the processor core, the power consumed by the processor core can be reduced during these waiting periods. Power can be conserved in the processor core even when threads are being processed, provided that the only threads being processed are in pending status, which can reduce the overall power consumption of the processor core and its corresponding CPU.
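
The core of the idea is a small decision rule, sketched below with assumed names and states (not the disclosed circuitry): enter a reduced-power mode only when every resident thread is pending on an off-core response and none is actively executing.

```python
# Sketch of the latency-based power mode decision.
from enum import Enum

class ThreadState(Enum):
    ACTIVE = 1       # executing instructions
    PENDING = 2      # waiting on a long-latency off-core request (e.g. memory)
    IDLE = 3

class PowerMode(Enum):
    FULL = 1
    REDUCED = 2

def choose_power_mode(thread_states):
    has_active = any(s is ThreadState.ACTIVE for s in thread_states)
    has_pending = any(s is ThreadState.PENDING for s in thread_states)
    if not has_active and has_pending:
        return PowerMode.REDUCED                 # all work is stalled outside the core
    return PowerMode.FULL

print(choose_power_mode([ThreadState.PENDING, ThreadState.PENDING]))   # REDUCED
print(choose_power_mode([ThreadState.ACTIVE, ThreadState.PENDING]))    # FULL
```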

Instruction Boundary Prediction For Variable Length Instruction Set

US Patent:
20140281246, Sep 18, 2014
Filed:
Mar 15, 2013
Appl. No.:
13/836374
Inventors:
Mauricio Breternitz, Jr. - Austin TX, US
Youfeng Wu - Palo Alto CA, US
Peter Sassone - Austin TX, US
James Mason - Austin TX, US
Aashish Phansalkar - Austin TX, US
Balaji Vijayan - Austin TX, US
International Classification:
G06F 12/08
US Classification:
711/125
Abstract:
A system, processor, and method to predict with high accuracy and retain instruction boundaries for previously executed instructions in order to decode variable length instructions is disclosed. In at least one embodiment, a disclosed processor includes an instruction fetch unit, an instruction cache, a boundary byte predictor, and an instruction decoder. In some embodiments, the instruction fetch unit provides an instruction address and the instruction cache produces an instruction tag and instruction cache content corresponding to the instruction address. The instruction decoder, in some embodiments, includes boundary byte logic to determine an instruction boundary in the instruction cache content.
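
The sketch below is an illustrative toy model of such a boundary predictor (structures, sizes, and names are assumptions, not the disclosed design): it remembers, per cache line, which byte offsets started instructions last time, so a later fetch of the same line can hand all boundaries to the decoder at once instead of rediscovering them serially.

```python
# Toy boundary-byte predictor for variable-length instruction decode.
LINE_BYTES = 16

class BoundaryPredictor:
    def __init__(self):
        self.table = {}                            # line address -> boundary bitmask

    def predict(self, line_addr):
        return self.table.get(line_addr, 0)        # 0 means no prediction yet

    def update(self, line_addr, start_offsets):
        mask = 0
        for off in start_offsets:
            mask |= 1 << off
        self.table[line_addr] = mask

def fetch_and_decode(line_addr, length_at, predictor):
    """length_at(offset) -> true length in bytes of the instruction at offset."""
    mask = predictor.predict(line_addr)
    if mask:
        # Prediction hit: every marked boundary is available to the decoder at once.
        return [b for b in range(LINE_BYTES) if mask & (1 << b)]
    # No prediction: walk the line serially, then retain the boundaries.
    starts, off = [], 0
    while off < LINE_BYTES:
        starts.append(off)
        off += length_at(off)
    predictor.update(line_addr, starts)
    return starts

bp = BoundaryPredictor()
lengths = {0: 3, 3: 5, 8: 2, 10: 6}                # toy variable-length encoding
print(fetch_and_decode(0x4000, lengths.get, bp))   # serial decode: [0, 3, 8, 10]
print(fetch_and_decode(0x4000, lengths.get, bp))   # predicted:     [0, 3, 8, 10]
```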